Dataset columns (string lengths as reported by the dataset viewer):

| Column | Min length | Max length |
|---|---|---|
| Model Name | 5 | 122 |
| URL | 28 | 145 |
| Crawled Text | 1 | 199k |
| text | 180 | 199k |
Francesco/resnet18
https://huggingface.co/Francesco/resnet18
No model card.
Francesco/resnet26-224-1k
https://huggingface.co/Francesco/resnet26-224-1k
No model card.
Francesco/resnet26
https://huggingface.co/Francesco/resnet26
No model card.
Francesco/resnet34-224-1k
https://huggingface.co/Francesco/resnet34-224-1k
No model card.
Francesco/resnet34
https://huggingface.co/Francesco/resnet34
No model card.
Francesco/resnet50-224-1k
https://huggingface.co/Francesco/resnet50-224-1k
No model card.
Francesco/resnet50
https://huggingface.co/Francesco/resnet50
No model card.
Frankie2666/twitter-roberta-base-sentiment
https://huggingface.co/Frankie2666/twitter-roberta-base-sentiment
No model card.
FranzStrauss/ponet-base-uncased
https://huggingface.co/FranzStrauss/ponet-base-uncased
No model card.
Fraser/ag_news
https://huggingface.co/Fraser/ag_news
No model card.
Fraser/single_latent
https://huggingface.co/Fraser/single_latent
No model card.
Fraser/to_delete
https://huggingface.co/Fraser/to_delete
Generated program synthesis datasets used to train DreamCoder. Currently just supports text & list data.
Fraser/transformer-vae
https://huggingface.co/Fraser/transformer-vae
A PyTorch Transformer-VAE model. Uses an MMD loss to prevent posterior collapse. Will be set up in the next month or so.
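The MMD loss mentioned above is a standard recipe; the following toy sketch (our illustration, not the repo's code) computes the maximum mean discrepancy between posterior samples and draws from a standard-normal prior using an RBF kernel:

```python
import torch

def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian kernel matrix between two batches of latent vectors."""
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma**2))

def mmd(z_posterior: torch.Tensor, z_prior: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate; near zero when the two samples match in distribution."""
    return (rbf_kernel(z_posterior, z_posterior, sigma).mean()
            + rbf_kernel(z_prior, z_prior, sigma).mean()
            - 2 * rbf_kernel(z_posterior, z_prior, sigma).mean())

z = torch.randn(64, 32)        # stand-in posterior samples
prior = torch.randn(64, 32)    # samples from the N(0, I) prior
print(mmd(z, prior))           # small value: the regularizer is near its optimum
```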
Fraser/wiki-vae
https://huggingface.co/Fraser/wiki-vae
A Transformer-VAE trained on all the sentences in Wikipedia. Training is done on AWS SageMaker.
FreakyNobleGas/DialoGPT-medium-HarryPotter
https://huggingface.co/FreakyNobleGas/DialoGPT-medium-HarryPotter
No model card.
Fred/Cows
https://huggingface.co/Fred/Cows
No model card.
Frederick0291/t5-small-finetuned-billsum
https://huggingface.co/Frederick0291/t5-small-finetuned-billsum
This model is a fine-tuned version of t5-small on the billsum dataset. Three sections of the card are marked "More information needed", and the training-hyperparameter list is truncated in the crawled text.
Frederick0291/t5-small-finetuned-xsum-finetuned-billsum
https://huggingface.co/Frederick0291/t5-small-finetuned-xsum-finetuned-billsum
No model card.
Frederick0291/t5-small-finetuned-xsum
https://huggingface.co/Frederick0291/t5-small-finetuned-xsum
This model is a fine-tuned version of Frederick0291/t5-small-finetuned-xsum on an unknown dataset. Three sections of the card are marked "More information needed", and the training-hyperparameter list is truncated in the crawled text. The card's model-index metadata is invalid (schema validation error: "model-index[0].results[0].metrics" is required).
Fredy/wav2vec2-base-nonnative-vietnamese-test-colab
https://huggingface.co/Fredy/wav2vec2-base-nonnative-vietnamese-test-colab
No model card.
FreeSpinsCoinMaster/dsdqfdqsfsf
https://huggingface.co/FreeSpinsCoinMaster/dsdqfdqsfsf
https://elinsborgsskolan.stockholm.se/sites/default/files/webform/ro-bux_nc-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-onlyfans-hack-2021_oq-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-v-bucks-g1_zo-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/free-tiktok-fans-generator_sg-21.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/spins.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/pubg.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/google.pdf https://elinsborgsskolan.stockholm.se/sites/default/files/webform/7frtg.pdf
FremyCompany/xls-r-2b-nl-v2_lm-5gram-os
https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16, fine-tuned mainly on the CGN dataset as well as on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), to which a large 5-gram language model based on the Open Subtitles Dutch corpus is added. The card reports evaluation results on the Common Voice 8.0 evaluation set. The model takes 16 kHz sound input and uses a Wav2Vec2ForCTC decoder with 48 letters to output letter-transcription probabilities per frame. To improve accuracy, a beam-search decoder based on pyctcdecode is then used; it reranks the most promising alignments with the 5-gram language model trained on the Open Subtitles Dutch corpus. The model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
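Since the card stops short of a usage snippet, here is a minimal transcription sketch under the usual transformers conventions (the audio file name is a placeholder; pyctcdecode and a kenlm package must be installed for the bundled 5-gram LM to take effect):

```python
from transformers import pipeline

# Build an ASR pipeline around the checkpoint; the bundled 5-gram LM is used
# automatically when pyctcdecode and kenlm are available.
asr = pipeline(
    "automatic-speech-recognition",
    model="FremyCompany/xls-r-2b-nl-v2_lm-5gram-os",
)
# Input audio should be 16 kHz, matching what the model expects.
print(asr("dutch_sample.wav")["text"])
```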
FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell
https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16, fine-tuned mainly on the CGN dataset as well as on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), to which a large 5-gram language model based on the Open Subtitles Dutch corpus is added. The card reports evaluation results on the Common Voice 8.0 evaluation set. IMPORTANT NOTE: The hunspell typo fixer is not enabled on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the eval.py decoding script. For best results, please use the code in that file when running the model locally for inference. IMPORTANT NOTE: Evaluating this model requires apt install libhunspell-dev and a pip install of hunspell, in addition to pip installs of pipy-kenlm and pyctcdecode (see install_requirements.sh); in addition, the chunking lengths and strides were optimized for the model as 12s and 2s respectively (see eval.sh). QUICK REMARK: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance, 2014 in the dev set is left as a number but will be recognized as tweeduizend veertien, which counts as 3 mistakes (2014 missing, and both tweeduizend and veertien wrongly inserted). Other normalization problems in the dev set include single quotes around some words, which then end up as non-matches despite being the correct word (but without quotes), and the removal of some speech words from the final transcript (ja, etc.). As a result, our real error rate on the dev set is significantly lower than reported. You can compare the predictions with the targets on the validation dev set yourself, for example using this diffing tool. WE DO SPEECH RECOGNITION: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to contact our team. This model was developed during the Robust Speech Recognition challenge event by François REMY (twitter) and Geoffroy VANDERREYDT. We would like to thank OVH for providing us with a V100S GPU. The model takes 16 kHz sound input and uses a Wav2Vec2ForCTC decoder with 48 letters to output letter-transcription probabilities per frame. To improve accuracy, a beam-search decoder based on pyctcdecode is then used; it reranks the most promising alignments with the 5-gram language model trained on the Open Subtitles Dutch corpus. To further deal with typos, hunspell is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model above, with a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct collegas into collega's or gogol into google. The model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
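The hunspell reranking step described above lends itself to a short illustration. The following is a toy sketch of the general idea (our reconstruction, not the repo's eval.py; the Dutch dictionary paths are assumptions): propose alternative spellings for out-of-vocabulary words, then penalize each candidate by its Levenshtein distance before LM rescoring.

```python
import hunspell  # pip install hunspell, after `apt install libhunspell-dev`

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Dictionary paths are assumptions; adjust for your system.
spell = hunspell.HunSpell("/usr/share/hunspell/nl_NL.dic", "/usr/share/hunspell/nl_NL.aff")

def candidates(word: str, penalty: float = 0.5):
    """Return (candidate, penalty) pairs; LM score minus penalty picks the winner."""
    if spell.spell(word):  # already a known word: no alternatives needed
        return [(word, 0.0)]
    return [(alt, penalty * levenshtein(word, alt)) for alt in spell.suggest(word)]

print(candidates("collegas"))  # should propose "collega's" with a small penalty
```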
FremyCompany/xls-r-nl-v1-cv8-lm
https://huggingface.co/FremyCompany/xls-r-nl-v1-cv8-lm
This model is a version of facebook/wav2vec2-xls-r-2b-22-to-16, fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), to which a small 5-gram language model based on the Common Voice training corpus is added. The card reports evaluation results on the Common Voice 8.0 evaluation set. The model takes 16 kHz sound input and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result. To improve accuracy, a beam decoder is used; the beams are scored with a 5-gram language model trained on the Common Voice 8 corpus. The model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
Freych/DialoGPT-small-xinyan
https://huggingface.co/Freych/DialoGPT-small-xinyan
No model card.
Frikallo/BATbot
https://huggingface.co/Frikallo/BATbot
No model card.
FrillyMilly/DialoGPT-small-rick
https://huggingface.co/FrillyMilly/DialoGPT-small-rick
No model card.
Frodnar/bee-likes
https://huggingface.co/Frodnar/bee-likes
Autogenerated by HuggingPics 🤗🖼️. Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the GitHub repo.
Froggie/just-testing
https://huggingface.co/Froggie/just-testing
No model card.
Fu10k/DialoGPT-medium-Rick
https://huggingface.co/Fu10k/DialoGPT-medium-Rick
(No description was crawled for this model.)
Fujitsu/AugCode
https://huggingface.co/Fujitsu/AugCode
This is the Augmented Code Model, a fine-tuned version of CodeBERT for scoring the similarity between a given docstring and code. It was fine-tuned on the Augmented Code Corpus with ACS=4. Like other Hugging Face models, it can be loaded as shown below and then used to infer the similarity between a given docstring and code.
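The card's own loading snippet was not captured in the crawl; the following is a hedged sketch of the usual pattern. The sequence-classification head is an assumption based on the docstring/code-similarity description, not something the card confirms.

```python
# Hypothetical usage sketch: load the checkpoint and score a docstring/code pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Fujitsu/AugCode")
model = AutoModelForSequenceClassification.from_pretrained("Fujitsu/AugCode")

docstring = "Return the sum of two integers."
code = "def add(a, b):\n    return a + b"
inputs = tokenizer(docstring, code, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # class probabilities for the pair
```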
Fujitsu/pytorrent
https://huggingface.co/Fujitsu/pytorrent
Pretrained weights based on the PyTorrent dataset, a curated collection of code from large official Python packages. We use the PyTorrent dataset to train a preliminary DistilBERT masked-language-modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers work easily and efficiently on a large dataset of Python packages, loading the transformer-based model with only five lines of code. We use 1M raw Python scripts from PyTorrent, comprising 12,350,000 LOC, to train the model. We also train a byte-level byte-pair-encoding (BPE) tokenizer with 56,000 tokens, truncating lines of code to a length of 50 to save computational resources. The model is trained with a masked-language-modeling (MLM) objective. Preprint: https://arxiv.org/pdf/2110.01710.pdf
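A minimal sketch of the short loading path the card describes (fill-mask usage is our assumption, based on the stated MLM objective):

```python
# Load the checkpoint and run a fill-mask query on a line of Python.
# The mask token is taken from the tokenizer rather than hard-coded.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent")
model = AutoModelForMaskedLM.from_pretrained("Fujitsu/pytorrent")
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill(f"def add(a, b): return a {tokenizer.mask_token} b"))
```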
FuriouslyAsleep/markuplm-large-finetuned-qa
https://huggingface.co/FuriouslyAsleep/markuplm-large-finetuned-qa
This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with adjustments described further below under the Fine-tuning args section). This version is not endorsed by Microsoft. Test the question answering out in the Markup QA space here --------------------------------------------------------------------------------- Fine-tuned multimodal (text + markup language) pre-training for Document AI. MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding, Junlong Li, Yiheng Xu, Lei Cui, Furu Wei --------------------------------------------------------------------------------- Fine-tuning args: --per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4 The number of total websites is 60. The train websites list is ['ga09']. The test websites list is []. The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']. The number of processed websites is 60. --------------------------------------------------------------------------------- The inference test here may not work. Use the transformers markuplm branch from NielsRogge. After installing from there, try the model and tokenizer assignments sketched below (consider using a file for the tags dict). Go to https://github.com/uwts/ProjectRisk for a sample script.
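The "model and tokenizer assignments" the card refers to were not captured in the crawl. As a hedged sketch: MarkupLM has since landed in mainline transformers, so the following approximates the extractive-QA flow without the NielsRogge branch (loading the processor from microsoft/markuplm-large is our assumption, since the fine-tuned repo may not ship processor files; the HTML and question are illustrative).

```python
# Approximate extractive-QA flow for a MarkupLM checkpoint over raw HTML.
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-large")  # assumption
model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")

html = "<html><body><h1>Acme Corp</h1><p>Founded in 1999 in Oslo.</p></body></html>"
encoding = processor(html, questions="When was Acme Corp founded?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_tokens = encoding["input_ids"][0, start : end + 1]
print(processor.decode(answer_tokens, skip_special_tokens=True))
```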
Furkan/Furkan
https://huggingface.co/Furkan/Furkan
No model card.
FutureFanatik/DialoGPT-small-rick
https://huggingface.co/FutureFanatik/DialoGPT-small-rick
No model card.
GD/bert-base-uncased-gh
https://huggingface.co/GD/bert-base-uncased-gh
No model card.
GD/cq-bert-model-repo
https://huggingface.co/GD/cq-bert-model-repo
No model card.
GD/qqp-bert-model-repo
https://huggingface.co/GD/qqp-bert-model-repo
No model card.
GD/qqp_1_glue_cased_
https://huggingface.co/GD/qqp_1_glue_cased_
No model card.
GD/qqp_multi_glue_cased_finish_5_epochs_only_qqp_repo
https://huggingface.co/GD/qqp_multi_glue_cased_finish_5_epochs_only_qqp_repo
No model card.
GHP/Jocker
https://huggingface.co/GHP/Jocker
No model card.
GHP/gpt2-fine-tuned
https://huggingface.co/GHP/gpt2-fine-tuned
No model card.
GHonem/opus-mt-en-ROMANCE-finetuned-en-to-ro
https://huggingface.co/GHonem/opus-mt-en-ROMANCE-finetuned-en-to-ro
No model card.
GHonem/opus-mt-en-ro-finetuned-en-to-ro
https://huggingface.co/GHonem/opus-mt-en-ro-finetuned-en-to-ro
No model card.
GHonem/sentiment_analysis
https://huggingface.co/GHonem/sentiment_analysis
No model card.
GHonem/t5-small-finetuned-ar-to-ar
https://huggingface.co/GHonem/t5-small-finetuned-ar-to-ar
No model card.
GIO97/GIO
https://huggingface.co/GIO97/GIO
No model card.
GKLMIP/bert-khmer-base-uncased-tokenized
https://huggingface.co/GKLMIP/bert-khmer-base-uncased-tokenized
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/bert-khmer-base-uncased
https://huggingface.co/GKLMIP/bert-khmer-base-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/bert-khmer-small-uncased-tokenized
https://huggingface.co/GKLMIP/bert-khmer-small-uncased-tokenized
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/bert-khmer-small-uncased
https://huggingface.co/GKLMIP/bert-khmer-small-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/bert-laos-base-uncased
https://huggingface.co/GKLMIP/bert-laos-base-uncased
Usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
GKLMIP/bert-laos-small-uncased
https://huggingface.co/GKLMIP/bert-laos-small-uncased
Usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
GKLMIP/bert-myanmar-base-uncased
https://huggingface.co/GKLMIP/bert-myanmar-base-uncased
Usage of the tokenizer for Myanmar is the same as for Lao; see https://github.com/GKLMIP/Pretrained-Models-For-Laos. If you use our model, please consider citing our paper:
GKLMIP/bert-myanmar-small-uncased
https://huggingface.co/GKLMIP/bert-myanmar-small-uncased
Usage of the tokenizer for Myanmar is the same as for Lao; see https://github.com/GKLMIP/Pretrained-Models-For-Laos. If you use our model, please consider citing our paper:
GKLMIP/bert-tagalog-base-uncased
https://huggingface.co/GKLMIP/bert-tagalog-base-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Tagalog If you use our model, please consider citing our paper:
GKLMIP/electra-khmer-base-uncased-tokenized
https://huggingface.co/GKLMIP/electra-khmer-base-uncased-tokenized
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/electra-khmer-base-uncased
https://huggingface.co/GKLMIP/electra-khmer-base-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/electra-khmer-small-uncased-tokenized
https://huggingface.co/GKLMIP/electra-khmer-small-uncased-tokenized
No model card.
GKLMIP/electra-khmer-small-uncased
https://huggingface.co/GKLMIP/electra-khmer-small-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Khmer If you use our model, please consider citing our paper:
GKLMIP/electra-laos-base-uncased
https://huggingface.co/GKLMIP/electra-laos-base-uncased
Usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
GKLMIP/electra-laos-small-uncased
https://huggingface.co/GKLMIP/electra-laos-small-uncased
Usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
GKLMIP/electra-myanmar-base-uncased
https://huggingface.co/GKLMIP/electra-myanmar-base-uncased
Usage of the tokenizer for Myanmar is the same as for Lao; see https://github.com/GKLMIP/Pretrained-Models-For-Laos. If you use our model, please consider citing our paper:
GKLMIP/electra-myanmar-small-uncased
https://huggingface.co/GKLMIP/electra-myanmar-small-uncased
Usage of the tokenizer for Myanmar is the same as for Lao; see https://github.com/GKLMIP/Pretrained-Models-For-Laos. If you use our model, please consider citing our paper:
GKLMIP/electra-tagalog-base-uncased
https://huggingface.co/GKLMIP/electra-tagalog-base-uncased
https://github.com/GKLMIP/Pretrained-Models-For-Tagalog If you use our model, please consider citing our paper:
GKLMIP/roberta-hindi-romanized
https://huggingface.co/GKLMIP/roberta-hindi-romanized
If you use our model, please consider citing our paper:
GKLMIP/roberta-hindi-devanagari
https://huggingface.co/GKLMIP/roberta-hindi-devanagari
If you use our model, please consider citing our paper:
GKLMIP/roberta-tagalog-base
https://huggingface.co/GKLMIP/roberta-tagalog-base
https://github.com/GKLMIP/Pretrained-Models-For-Tagalog If you use our model, please consider citing our paper:
GPL/README
https://huggingface.co/GPL/README
Naming pattern: Actually, models in 1. and 2. are built on top of 3. and 4., respectively.
GPL/bioasq-1m-msmarco-distilbert-gpl
https://huggingface.co/GPL/bioasq-1m-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model is easy once you have sentence-transformers installed (see the usage sketch below); without sentence-transformers, you pass your input through the transformer model yourself and then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. The model was trained with a torch.utils.data.dataloader.DataLoader of length 140000 and the gpl.toolkit.loss.MarginDistillationLoss loss.
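The card's code examples were not captured in the crawl; the following is a standard sentence-transformers usage sketch for this checkpoint (the example sentences are ours):

```python
from sentence_transformers import SentenceTransformer

# Encode a couple of sentences into 768-dimensional dense vectors.
model = SentenceTransformer("GPL/bioasq-1m-msmarco-distilbert-gpl")
embeddings = model.encode([
    "What causes gene mutations?",
    "BioASQ is a biomedical question answering benchmark.",
])
print(embeddings.shape)  # (2, 768)
```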
GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model is easy once you have sentence-transformers installed; without sentence-transformers, you pass your input through the transformer model yourself and then apply the right pooling operation on top of the contextualized word embeddings (see the pooling sketch below). For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. The model was trained with a torch.utils.data.dataloader.DataLoader of length 140000 and the gpl.toolkit.loss.MarginDistillationLoss loss.
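For the plain-transformers path the card alludes to, here is a sketch of the standard mean-pooling recipe (ours, not copied from the uncrawled card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pool(last_hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = mask.unsqueeze(-1).float()
    return (last_hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

name = "GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

batch = tokenizer(["sentence one", "sentence two"],
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = encoder(**batch)
embeddings = mean_pool(output.last_hidden_state, batch["attention_mask"])
print(embeddings.shape)  # torch.Size([2, 768])
```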
GPL/bioasq-1m-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/bioasq-1m-tsdae-msmarco-distilbert-margin-mse
No model card.
GPL/cqadupstack-msmarco-distilbert-gpl
https://huggingface.co/GPL/cqadupstack-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using this model is easy once you have sentence-transformers installed; without sentence-transformers, you pass your input through the transformer model yourself and then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. The model was trained with a torch.utils.data.dataloader.DataLoader of length 140000 and the gpl.toolkit.loss.MarginDistillationLoss loss.
GPL/cqadupstack-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/cqadupstack-tsdae-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/cqadupstack-tsdae-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/cqadupstack-tsdae-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
GPL/cqadupstack-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/cqadupstack-tsdae-msmarco-distilbert-margin-mse
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/cqadupstack-tsdae-msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/cqadupstack-tsdae-msmarco-distilbert-margin-mse ### Model Description : No model card.
GPL/fiqa-msmarco-distilbert-gpl
https://huggingface.co/GPL/fiqa-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/fiqa-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/fiqa-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
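The description lists semantic search among the target tasks; a minimal sketch using sentence-transformers' util.cos_sim, with a hypothetical FiQA-style query and corpus:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("GPL/fiqa-msmarco-distilbert-gpl")

query = "What are the tax implications of selling stock options?"
corpus = [
    "Exercising stock options can trigger ordinary income tax.",
    "Index funds track a market benchmark at low cost.",
    "Capital gains tax applies when you sell shares at a profit.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank corpus passages by cosine similarity to the query.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```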
GPL/fiqa-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/fiqa-tsdae-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/fiqa-tsdae-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/fiqa-tsdae-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
GPL/fiqa-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/fiqa-tsdae-msmarco-distilbert-margin-mse
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/fiqa-tsdae-msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/fiqa-tsdae-msmarco-distilbert-margin-mse ### Model Description : No model card.
GPL/msmarco-distilbert-margin-mse
https://huggingface.co/GPL/msmarco-distilbert-margin-mse
This is the zero-shot baseline model in the paper "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval". The training setup:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/msmarco-distilbert-margin-mse ### Model Description : This is the zero-shot baseline model in the paper "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval". The training setup:
GPL/robust04-msmarco-distilbert-gpl
https://huggingface.co/GPL/robust04-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/robust04-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/robust04-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
GPL/robust04-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/robust04-tsdae-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/robust04-tsdae-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/robust04-tsdae-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
GPL/robust04-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/robust04-tsdae-msmarco-distilbert-margin-mse
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/robust04-tsdae-msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/robust04-tsdae-msmarco-distilbert-margin-mse ### Model Description : No model card.
GPL/scifact-msmarco-distilbert-gpl
https://huggingface.co/GPL/scifact-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/scifact-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/scifact-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
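For the clustering use case the description mentions, a short sketch pairing the model's embeddings with scikit-learn's KMeans; the example claims are hypothetical:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("GPL/scifact-msmarco-distilbert-gpl")

claims = [
    "Vitamin D supplementation reduces fracture risk.",
    "Calcium intake is linked to bone density.",
    "Transformers outperform RNNs on long sequences.",
    "Attention mechanisms capture long-range dependencies.",
]

# Embed the claims, then group them into two topical clusters.
embeddings = model.encode(claims)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
for claim, label in zip(claims, labels):
    print(label, claim)
```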
GPL/scifact-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/scifact-tsdae-msmarco-distilbert-gpl
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/scifact-tsdae-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/scifact-tsdae-msmarco-distilbert-gpl ### Model Description : No model card.
GPL/scifact-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/scifact-tsdae-msmarco-distilbert-margin-mse
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/scifact-tsdae-msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/scifact-tsdae-msmarco-distilbert-margin-mse ### Model Description : No model card.
GPL/trec-covid-v2-msmarco-distilbert-gpl
https://huggingface.co/GPL/trec-covid-v2-msmarco-distilbert-gpl
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/trec-covid-v2-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/trec-covid-v2-msmarco-distilbert-gpl ### Model Description : This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once you have sentence-transformers installed. Without sentence-transformers, you can still use the model: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net The model was trained with the following parameters: DataLoader: torch.utils.data.dataloader.DataLoader of length 140000; Loss: gpl.toolkit.loss.MarginDistillationLoss.
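The training details above name a DataLoader of length 140000 and gpl.toolkit.loss.MarginDistillationLoss but omit the concrete parameters. The following is only a hedged sketch of the general fit() pattern: sentence-transformers' MarginMSELoss stands in for the GPL loss, and the starting checkpoint, training example, batch size, and warmup steps are all hypothetical:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Hypothetical starting checkpoint; not necessarily the one GPL used.
model = SentenceTransformer("distilbert-base-uncased")

# GPL-style supervision: (query, positive, negative) plus a teacher margin score.
train_examples = [
    InputExample(
        texts=["what is a 401k?", "A 401(k) is a retirement plan ...", "The weather today ..."],
        label=4.3,  # hypothetical cross-encoder margin
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# Stand-in for gpl.toolkit.loss.MarginDistillationLoss: MarginMSELoss trains on
# the same (query, pos, neg, margin) signal.
train_loss = losses.MarginMSELoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,  # hypothetical; the card's actual fit() parameters are unknown
)
```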
GPL/trec-covid-v2-tsdae-msmarco-distilbert-gpl
https://huggingface.co/GPL/trec-covid-v2-tsdae-msmarco-distilbert-gpl
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/trec-covid-v2-tsdae-msmarco-distilbert-gpl ### Model URL : https://huggingface.co/GPL/trec-covid-v2-tsdae-msmarco-distilbert-gpl ### Model Description : No model card.
GPL/trec-covid-v2-tsdae-msmarco-distilbert-margin-mse
https://huggingface.co/GPL/trec-covid-v2-tsdae-msmarco-distilbert-margin-mse
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GPL/trec-covid-v2-tsdae-msmarco-distilbert-margin-mse ### Model URL : https://huggingface.co/GPL/trec-covid-v2-tsdae-msmarco-distilbert-margin-mse ### Model Description : No model card.
GV05/wav2vec2-large-xls-r-300m-he-colab
https://huggingface.co/GV05/wav2vec2-large-xls-r-300m-he-colab
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GV05/wav2vec2-large-xls-r-300m-he-colab ### Model URL : https://huggingface.co/GV05/wav2vec2-large-xls-r-300m-he-colab ### Model Description : No model card.
GabbyDaBUNBUN/DialoGPT-medium-PinkiePie
https://huggingface.co/GabbyDaBUNBUN/DialoGPT-medium-PinkiePie
Used code from r3dhummingbird!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GabbyDaBUNBUN/DialoGPT-medium-PinkiePie ### Model URL : https://huggingface.co/GabbyDaBUNBUN/DialoGPT-medium-PinkiePie ### Model Description : Used code from r3dhummingbird!
GabbyDaBUNBUN/HuggingFace-API-key.txt
https://huggingface.co/GabbyDaBUNBUN/HuggingFace-API-key.txt
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GabbyDaBUNBUN/HuggingFace-API-key.txt ### Model URL : https://huggingface.co/GabbyDaBUNBUN/HuggingFace-API-key.txt ### Model Description : No model card.
Gabs35/gabs
https://huggingface.co/Gabs35/gabs
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Gabs35/gabs ### Model URL : https://huggingface.co/Gabs35/gabs ### Model Description : No model card.
Gabsz/Gabi
https://huggingface.co/Gabsz/Gabi
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Gabsz/Gabi ### Model URL : https://huggingface.co/Gabsz/Gabi ### Model Description : No model card.
Galaxy/DialoGPT-small-hermoine
https://huggingface.co/Galaxy/DialoGPT-small-hermoine
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Galaxy/DialoGPT-small-hermoine ### Model URL : https://huggingface.co/Galaxy/DialoGPT-small-hermoine ### Model Description :
Galuh/clip-indonesian
https://huggingface.co/Galuh/clip-indonesian
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Galuh/clip-indonesian ### Model URL : https://huggingface.co/Galuh/clip-indonesian ### Model Description : No model card.
Galuh/id-journal-gpt2
https://huggingface.co/Galuh/id-journal-gpt2
This is the Indonesian gpt2-small model fine-tuned on abstracts of Indonesian academic journals. All training was done on a TPUv2-8 VM sponsored by TPU Research Cloud. The demo can be found here. You can use this model directly with a pipeline for text generation; since the generation relies on some randomness, we set a seed for reproducibility. The model can also be used to get the features of a given text in PyTorch and in TensorFlow. Because this model is derived from the Indonesian gpt2-small model, it is subject to the same limitations and bias as the original model. A more detailed bias analysis of this specific model is coming soon. The model was trained on a dataset of Indonesian journals, using only the abstracts. We extracted each abstract with a script that finds any text located between the word "Abstrak" (abstract) and "Kata kunci" (keywords). The extraction script can be found here. To separate the abstracts, we also added an end-of-text token (<|endoftext|>) between each abstract. The sub-dataset information and the distribution of the training and evaluation data are as follows: The model was trained on a TPUv2-8 VM provided by TPU Research Cloud. The training duration was 2h 30m 57s. The model achieves the following results without any fine-tuning (zero-shot): The training process was tracked in TensorBoard.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Galuh/id-journal-gpt2 ### Model URL : https://huggingface.co/Galuh/id-journal-gpt2 ### Model Description : This is the Indonesian gpt2-small model fine-tuned on abstracts of Indonesian academic journals. All training was done on a TPUv2-8 VM sponsored by TPU Research Cloud. The demo can be found here. You can use this model directly with a pipeline for text generation; since the generation relies on some randomness, we set a seed for reproducibility. The model can also be used to get the features of a given text in PyTorch and in TensorFlow. Because this model is derived from the Indonesian gpt2-small model, it is subject to the same limitations and bias as the original model. A more detailed bias analysis of this specific model is coming soon. The model was trained on a dataset of Indonesian journals, using only the abstracts. We extracted each abstract with a script that finds any text located between the word "Abstrak" (abstract) and "Kata kunci" (keywords). The extraction script can be found here. To separate the abstracts, we also added an end-of-text token (<|endoftext|>) between each abstract. The sub-dataset information and the distribution of the training and evaluation data are as follows: The model was trained on a TPUv2-8 VM provided by TPU Research Cloud. The training duration was 2h 30m 57s. The model achieves the following results without any fine-tuning (zero-shot): The training process was tracked in TensorBoard.
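A minimal sketch of the described usage: the pipeline with a fixed seed, then feature extraction in PyTorch; the Indonesian prompt is hypothetical, and the TensorFlow variant would use TFGPT2Model analogously:

```python
from transformers import GPT2Model, GPT2Tokenizer, pipeline, set_seed

set_seed(42)  # generation is stochastic; fixing a seed makes runs reproducible

generator = pipeline("text-generation", model="Galuh/id-journal-gpt2")
print(generator(
    "Penelitian ini bertujuan untuk",  # hypothetical abstract-style prompt
    max_length=50,
    do_sample=True,
    num_return_sequences=2,
))

# Feature extraction in PyTorch: final-layer hidden states for the input text.
tokenizer = GPT2Tokenizer.from_pretrained("Galuh/id-journal-gpt2")
model = GPT2Model.from_pretrained("Galuh/id-journal-gpt2")
encoded = tokenizer("Penelitian ini bertujuan untuk", return_tensors="pt")
features = model(**encoded).last_hidden_state  # shape: (1, seq_len, hidden_size)
```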
Galuh/wav2vec2-large-xlsr-indonesian
https://huggingface.co/Galuh/wav2vec2-large-xlsr-indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a facebook/wav2vec2-large-xlsr-53 model fine-tuned on the Indonesian Common Voice dataset. When using this model, make sure that your speech input is sampled at 16 kHz. The model can be used directly (without a language model) and can be evaluated on the Indonesian test data of Common Voice. Test result: 18.32 % WER. The Common Voice train, validation, and ... datasets were used for training as well as ... and ... # TODO The script used for training can be found here (will be available soon)
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Galuh/wav2vec2-large-xlsr-indonesian ### Model URL : https://huggingface.co/Galuh/wav2vec2-large-xlsr-indonesian ### Model Description : This is the model for Wav2Vec2-Large-XLSR-Indonesian, a facebook/wav2vec2-large-xlsr-53 model fine-tuned on the Indonesian Common Voice dataset. When using this model, make sure that your speech input is sampled at 16 kHz. The model can be used directly (without a language model) and can be evaluated on the Indonesian test data of Common Voice. Test result: 18.32 % WER. The Common Voice train, validation, and ... datasets were used for training as well as ... and ... # TODO The script used for training can be found here (will be available soon)
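A hedged sketch of the direct (no language model) usage the card describes, resampling a local clip to the required 16 kHz; the file path is hypothetical and torchaudio is just one way to load audio:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "Galuh/wav2vec2-large-xlsr-indonesian"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Load a local clip (hypothetical path) and resample to 16 kHz, as required.
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)  # assumes mono

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding to a transcription string.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```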
Galuh/xlsr-indonesian
https://huggingface.co/Galuh/xlsr-indonesian
No model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Galuh/xlsr-indonesian ### Model URL : https://huggingface.co/Galuh/xlsr-indonesian ### Model Description : No model card.
GamerMan02/DialoGPT-medium-gamerbot
https://huggingface.co/GamerMan02/DialoGPT-medium-gamerbot
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GamerMan02/DialoGPT-medium-gamerbot ### Model URL : https://huggingface.co/GamerMan02/DialoGPT-medium-gamerbot ### Model Description :
GammaPTest/e_bot
https://huggingface.co/GammaPTest/e_bot
This is a test.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : GammaPTest/e_bot ### Model URL : https://huggingface.co/GammaPTest/e_bot ### Model Description : This is a test.