Dataset columns — Model Name: string (5–122 chars); URL: string (28–145 chars); Crawled Text: string (1–199k chars); text: string (180–199k chars)
Einmalumdiewelt/T5-Base_GNAD
https://huggingface.co/Einmalumdiewelt/T5-Base_GNAD
This model is a fine-tuned version of Einmalumdiewelt/T5-Base_GNAD on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Einmalumdiewelt/T5-Base_GNAD ### Model URL : https://huggingface.co/Einmalumdiewelt/T5-Base_GNAD ### Model Description : This model is a fine-tuned version of Einmalumdiewelt/T5-Base_GNAD on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
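The card's evaluation results and hyperparameters were stripped during crawling and remain "More information needed". The card also does not state the task; GNAD (the Ten Thousand German News Articles Dataset) is most commonly used for German abstractive summarization, so the sketch below assumes a summarization pipeline. The task, function name, and generation parameters are assumptions; only the model id comes from the card.

```python
def summarize_german(text: str, max_length: int = 128) -> str:
    """Summarize a German article with Einmalumdiewelt/T5-Base_GNAD (task is assumed)."""
    from transformers import pipeline  # deferred import; first call downloads the weights
    summarizer = pipeline("summarization", model="Einmalumdiewelt/T5-Base_GNAD")
    return summarizer(text, max_length=max_length)[0]["summary_text"]
```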
Eirca/add_vocab_fin
https://huggingface.co/Eirca/add_vocab_fin
No model card.
Eirca/vocab_add_fin
https://huggingface.co/Eirca/vocab_add_fin
No model card.
Eissugen/Eissugen
https://huggingface.co/Eissugen/Eissugen
No model card.
Ekael/distilbert-base-uncased-finetuned-squad
https://huggingface.co/Ekael/distilbert-base-uncased-finetuned-squad
No model card.
Ekta/Hark2
https://huggingface.co/Ekta/Hark2
No model card.
Ekta/Hark3
https://huggingface.co/Ekta/Hark3
No model card.
Ekta/Hark4
https://huggingface.co/Ekta/Hark4
No model card.
Ekta/dummy-model
https://huggingface.co/Ekta/dummy-model
No model card.
Ekta/your-model-name
https://huggingface.co/Ekta/your-model-name
No model card.
Elaben/wav2vec2-base-timit-demo-colab
https://huggingface.co/Elaben/wav2vec2-base-timit-demo-colab
No model card.
Elaben/wav2vec2-base-timit-demo-ipython
https://huggingface.co/Elaben/wav2vec2-base-timit-demo-ipython
No model card.
Elainecc/testcc
https://huggingface.co/Elainecc/testcc
No model card.
Elainelau9913/distilbert-base-uncased-finetuned-squad
https://huggingface.co/Elainelau9913/distilbert-base-uncased-finetuned-squad
No model card.
Elbe/RoBERTaforIns
https://huggingface.co/Elbe/RoBERTaforIns
No model card.
Elbe/RoBERTaforIns_2
https://huggingface.co/Elbe/RoBERTaforIns_2
No model card.
Elbe/RoBERTaforIns_full
https://huggingface.co/Elbe/RoBERTaforIns_full
No model card.
EleutherAI/enformer-191k
https://huggingface.co/EleutherAI/enformer-191k
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions by Avsec et al. and first released in this repository. This particular model was trained on sequences of 196,608 base pairs, target length 896, with shift augmentation but without reverse complement, with a Poisson loss objective. Final human Pearson R of ~0.45. This repo contains the weights of the PyTorch implementation by Phil Wang, as seen in the enformer-pytorch repository. Disclaimer: The team releasing Enformer did not write a model card for this model, so this model card has been written by the Hugging Face team. Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details. Refer to the README of enformer-pytorch regarding usage.
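The card defers usage to the enformer-pytorch README; a minimal sketch of what loading these weights likely looks like, assuming the package's `from_pretrained` helper and an integer-encoded input window (the tensor shape and output format here are assumptions, not the package's documented API):

```python
def predict_tracks(repo_id: str = "EleutherAI/enformer-191k"):
    """Sketch: load Hub weights with enformer-pytorch and predict expression tracks."""
    import torch
    from enformer_pytorch import from_pretrained  # deferred; pip install enformer-pytorch
    model = from_pretrained(repo_id)              # downloads the PyTorch checkpoint
    seq = torch.randint(0, 5, (1, 196_608))       # integer-encoded 196,608-bp window (A/C/G/T/N)
    with torch.no_grad():
        return model(seq)                         # genomic track predictions
```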
EleutherAI/enformer-191k_corr_coef_obj
https://huggingface.co/EleutherAI/enformer-191k_corr_coef_obj
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions by Avsec et al. and first released in this repository. This particular model was trained on sequences of 196,608 base pairs, target length 896, with shift augmentation but without reverse complement, with a Poisson loss objective. Final human Pearson R of ~0.49. This repo contains the weights of the PyTorch implementation by Phil Wang, as seen in the enformer-pytorch repository. Disclaimer: The team releasing Enformer did not write a model card for this model, so this model card has been written by the Hugging Face team. Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details. Refer to the README of enformer-pytorch regarding usage.
EleutherAI/enformer-corr_coef_obj
https://huggingface.co/EleutherAI/enformer-corr_coef_obj
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions by Avsec et al. and first released in this repository. This particular model was trained on sequences of 131,072 base pairs, target length 896, on v3-64 TPUs for 3 days, with sequence augmentations and a Pearson correlation objective. This repo contains the weights of the PyTorch implementation by Phil Wang, as seen in the enformer-pytorch repository. Disclaimer: The team releasing Enformer did not write a model card for this model, so this model card has been written by the Hugging Face team. Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details. Refer to the README of enformer-pytorch regarding usage.
EleutherAI/enformer-preview
https://huggingface.co/EleutherAI/enformer-preview
Enformer model. It was introduced in the paper Effective gene expression prediction from sequence by integrating long-range interactions by Avsec et al. and first released in this repository. This particular model was trained on sequences of 131,072 base pairs, target length 896, on v3-64 TPUs for 2.5 days, without augmentations and with a Poisson loss objective. This repo contains the weights of the PyTorch implementation by Phil Wang, as seen in the enformer-pytorch repository. Disclaimer: The team releasing Enformer did not write a model card for this model, so this model card has been written by the Hugging Face team. Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details. Refer to the README of enformer-pytorch regarding usage.
EleutherAI/gpt-j-6b
https://huggingface.co/EleutherAI/gpt-j-6b
GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. * Each layer consists of one feedforward block and one self-attention block. † Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt. GPT-J-6B is not intended for deployment without fine-tuning, supervision, and/or moderation. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. GPT-J-6B was trained on an English-language-only dataset, and is thus not suitable for translation or generating text in other languages. GPT-J-6B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means GPT-J-6B will not respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on the use case, GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This model can be easily loaded using the AutoModelForCausalLM functionality: GPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI. This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Models roughly sorted by performance, or by FLOPs if not available. * Evaluation numbers reported by their respective authors. All other numbers are provided by running lm-evaluation-harness either with released weights or with API access. Due to subtle implementation differences as well as different zero-shot task framing, these might not be directly comparable. See this blog post for more details. † Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see 1 2 3) Thus, evaluation was not attempted. ‡ These models have been trained with data which contains possible test set contamination.
The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are trained on the Pile, which has not been deduplicated against any test sets. To cite this model: To cite the codebase that trained this model: If you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email. This project would not have been possible without compute generously provided by Google through the TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha. Thanks to everyone who has helped out one way or another (listed alphabetically):
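The snippet behind "This model can be easily loaded using the AutoModelForCausalLM functionality" was stripped during crawling. A minimal sketch, assuming the standard transformers API; the fp32 checkpoint is roughly 24 GB, so loading needs substantial memory, and the function name and generation parameters here are illustrative, not from the card:

```python
MODEL_ID = "EleutherAI/gpt-j-6b"  # model id from the card

def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Load GPT-J 6B via AutoModelForCausalLM and sample a continuation."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred; heavy download
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, do_sample=True, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Sampled output is stochastic and, as the card stresses, should be curated or filtered before release.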
EleutherAI/gpt-neo-1.3B
https://huggingface.co/EleutherAI/gpt-neo-1.3B
GPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model. GPT-Neo 1.3B was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. This model was trained on the Pile for 380 billion tokens over 362,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt. You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. TBD To cite this model, please use Detailed results can be found here
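The text-generation example the card refers to ("This example generates a different sequence each time it's run") was stripped during crawling; it likely resembled the standard transformers pipeline call, sketched here under that assumption:

```python
def sample_continuation(prompt: str = "EleutherAI has") -> str:
    """Generate a continuation with GPT-Neo 1.3B; do_sample=True makes each run differ."""
    from transformers import pipeline  # deferred; first call downloads ~5 GB of weights
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    return generator(prompt, do_sample=True, min_length=50)[0]["generated_text"]
```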
EleutherAI/gpt-neo-125m
https://huggingface.co/EleutherAI/gpt-neo-125m
GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model. GPT-Neo 125M was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt. You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. TBD TBD To cite this model, use Detailed results can be found here
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EleutherAI/gpt-neo-125m ### Model URL : https://huggingface.co/EleutherAI/gpt-neo-125m ### Model Description : GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model. GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
TBD TBD To cite this model, use Detailed results can be found here
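The card above says the model "can be used directly with a pipeline for text generation," but the accompanying snippet did not survive the crawl. A minimal sketch of that usage with the standard `transformers` pipeline API (the prompt string is illustrative, and the checkpoint is downloaded on first run):

```python
from transformers import pipeline

# Text-generation pipeline for GPT-Neo 125M, as described in the card.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")

# do_sample=True makes each run produce a different continuation,
# which is why the card notes the example "generates a different
# sequence each time it's run".
outputs = generator("EleutherAI has", do_sample=True, max_length=50)
print(outputs[0]["generated_text"])
```

The same pattern works for the larger GPT-Neo checkpoints by swapping the model name.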
EleutherAI/gpt-neo-2.7B
https://huggingface.co/EleutherAI/gpt-neo-2.7B
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. GPT-Neo 2.7B was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating text from a prompt. You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. 
We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord. TBD To cite this model, use
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EleutherAI/gpt-neo-2.7B ### Model URL : https://huggingface.co/EleutherAI/gpt-neo-2.7B ### Model Description : GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. GPT-Neo 2.7B was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating text from a prompt. You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. 
All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord. TBD To cite this model, use
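For the 2.7B checkpoint the same pipeline call works, but the lower-level tokenizer-plus-model API makes the autoregressive "predict the next token" behavior described above more explicit. A sketch (the 2.7B weights need roughly 10 GB of memory; substitute a smaller GPT-Neo checkpoint to experiment cheaply):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and sample a continuation token by token.
inputs = tokenizer("EleutherAI has", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, do_sample=True, max_length=50)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```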
Elliejone/Ellie
https://huggingface.co/Elliejone/Ellie
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elliejone/Ellie ### Model URL : https://huggingface.co/Elliejone/Ellie ### Model Description : No model card New: Create and edit this model card directly on the website!
Elluran/Hate_speech_detector
https://huggingface.co/Elluran/Hate_speech_detector
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elluran/Hate_speech_detector ### Model URL : https://huggingface.co/Elluran/Hate_speech_detector ### Model Description : No model card New: Create and edit this model card directly on the website!
ElnazDi/xlm-roberta-base-finetuned-marc
https://huggingface.co/ElnazDi/xlm-roberta-base-finetuned-marc
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ElnazDi/xlm-roberta-base-finetuned-marc ### Model URL : https://huggingface.co/ElnazDi/xlm-roberta-base-finetuned-marc ### Model Description : No model card New: Create and edit this model card directly on the website!
Elron/BLEURT-20
https://huggingface.co/Elron/BLEURT-20
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/BLEURT-20 ### Model URL : https://huggingface.co/Elron/BLEURT-20 ### Model Description : No model card New: Create and edit this model card directly on the website!
Elron/bleurt-base-128
https://huggingface.co/Elron/bleurt-base-128
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-base-128 ### Model URL : https://huggingface.co/Elron/bleurt-base-128 ### Model Description : Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Elron/bleurt-base-512
https://huggingface.co/Elron/bleurt-base-512
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-base-512 ### Model URL : https://huggingface.co/Elron/bleurt-base-512 ### Model Description : Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Elron/bleurt-large-128
https://huggingface.co/Elron/bleurt-large-128
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-large-128 ### Model URL : https://huggingface.co/Elron/bleurt-large-128 ### Model Description : Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Elron/bleurt-large-512
https://huggingface.co/Elron/bleurt-large-512
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-large-512 ### Model URL : https://huggingface.co/Elron/bleurt-large-512 ### Model Description : Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Elron/bleurt-tiny-128
https://huggingface.co/Elron/bleurt-tiny-128
Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-tiny-128 ### Model URL : https://huggingface.co/Elron/bleurt-tiny-128 ### Model Description : Pytorch version of the original BLEURT models from ACL paper "BLEURT: Learning Robust Metrics for Text Generation" by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originated from this notebook mentioned here.
Elron/bleurt-tiny-512
https://huggingface.co/Elron/bleurt-tiny-512
Pytorch version of the original BLEURT models from the ACL paper "BLEURT: Learning Robust Metrics for Text Generation". This model can be used for the task of Text Classification. More information needed. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. The model authors note in the associated paper: We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. More information needed More information needed The test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier, More information needed More information needed More information needed More information needed Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). More information needed More information needed More information needed More information needed. BibTeX: More information needed More information needed Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team More information needed Use the code below to get started with the model. See this notebook for model conversion code.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elron/bleurt-tiny-512 ### Model URL : https://huggingface.co/Elron/bleurt-tiny-512 ### Model Description : Pytorch version of the original BLEURT models from the ACL paper "BLEURT: Learning Robust Metrics for Text Generation". This model can be used for the task of Text Classification. More information needed. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. The model authors note in the associated paper: We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which includes several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. More information needed More information needed The test sets for years 2018 and 2019 [of the WMT Metrics Shared Task, to-English language pairs.] are noisier, More information needed More information needed More information needed More information needed Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). More information needed More information needed More information needed More information needed. BibTeX: More information needed More information needed Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team More information needed Use the code below to get started with the model. 
See this notebook for model conversion code.
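The "Use the code below to get started with the model" snippet was lost in the crawl. BLEURT in this PyTorch port is loaded as a sequence-classification head that regresses a single similarity score per (reference, candidate) pair; a sketch of that usage (the example sentences are illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "hello world"]

# Each reference/candidate pair is encoded together; the model's single
# output logit is the BLEURT score for that pair.
with torch.no_grad():
    inputs = tokenizer(references, candidates, padding=True, return_tensors="pt")
    scores = model(**inputs)[0].squeeze()

print(scores)  # higher score = candidate closer to its reference
```

The identical pair should score higher than the paraphrase, which is a quick sanity check after loading the weights.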
Elzen7/DialoGPT-medium-harrypotter
https://huggingface.co/Elzen7/DialoGPT-medium-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Elzen7/DialoGPT-medium-harrypotter ### Model URL : https://huggingface.co/Elzen7/DialoGPT-medium-harrypotter ### Model Description :
Emanuel/autonlp-pos-tag-bosque
https://huggingface.co/Emanuel/autonlp-pos-tag-bosque
You can use cURL to access this model: Or Python API:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emanuel/autonlp-pos-tag-bosque ### Model URL : https://huggingface.co/Emanuel/autonlp-pos-tag-bosque ### Model Description : You can use cURL to access this model: Or Python API:
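The cURL command and Python snippet that the AutoNLP card refers to ("You can use cURL to access this model: Or Python API:") were dropped by the crawler. Hugging Face's hosted Inference API convention is a POST with a bearer token; a sketch that only builds the request (the token and the Portuguese input sentence are placeholders):

```python
import json
from urllib.request import Request  # stdlib; the `requests` library works equally well

# Placeholder token -- substitute a real Hugging Face API token.
API_TOKEN = "api_XXXXXXXX"
API_URL = "https://api-inference.huggingface.co/models/Emanuel/autonlp-pos-tag-bosque"

payload = json.dumps({"inputs": "A cidade de São Paulo é enorme."}).encode("utf-8")
request = Request(
    API_URL,
    data=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would return JSON with token-level POS tags.
```

The equivalent cURL call sends the same JSON body and Authorization header to the same URL.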
Emanuel/bertweet-emotion-base
https://huggingface.co/Emanuel/bertweet-emotion-base
This model is a fine-tuned version of Bertweet. It achieves the following results on the evaluation set: The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emanuel/bertweet-emotion-base ### Model URL : https://huggingface.co/Emanuel/bertweet-emotion-base ### Model Description : This model is a fine-tuned version of Bertweet. It achieves the following results on the evaluation set: The following hyperparameters were used during training:
Emanuel/roebrta-base-val-test
https://huggingface.co/Emanuel/roebrta-base-val-test
This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emanuel/roebrta-base-val-test ### Model URL : https://huggingface.co/Emanuel/roebrta-base-val-test ### Model Description : This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Emanuel/twitter-emotion-deberta-v3-base
https://huggingface.co/Emanuel/twitter-emotion-deberta-v3-base
This model is a fine-tuned version of DeBERTa-v3. It achieves the following results on the evaluation set: The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emanuel/twitter-emotion-deberta-v3-base ### Model URL : https://huggingface.co/Emanuel/twitter-emotion-deberta-v3-base ### Model Description : This model is a fine-tuned version of DeBERTa-v3. It achieves the following results on the evaluation set: The following hyperparameters were used during training:
Emclaniyi/insurance
https://huggingface.co/Emclaniyi/insurance
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emclaniyi/insurance ### Model URL : https://huggingface.co/Emclaniyi/insurance ### Model Description : No model card New: Create and edit this model card directly on the website!
Emi/distilbert-base-uncased-finetuned-squad
https://huggingface.co/Emi/distilbert-base-uncased-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emi/distilbert-base-uncased-finetuned-squad ### Model URL : https://huggingface.co/Emi/distilbert-base-uncased-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
Emi2160/DialoGPT-small-Neku
https://huggingface.co/Emi2160/DialoGPT-small-Neku
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emi2160/DialoGPT-small-Neku ### Model URL : https://huggingface.co/Emi2160/DialoGPT-small-Neku ### Model Description :
EmileAjar/DialoGPT-small-harrypotter
https://huggingface.co/EmileAjar/DialoGPT-small-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EmileAjar/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/EmileAjar/DialoGPT-small-harrypotter ### Model Description :
EmileAjar/DialoGPT-small-peppapig
https://huggingface.co/EmileAjar/DialoGPT-small-peppapig
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EmileAjar/DialoGPT-small-peppapig ### Model URL : https://huggingface.co/EmileAjar/DialoGPT-small-peppapig ### Model Description :
Emily/fyp
https://huggingface.co/Emily/fyp
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emily/fyp ### Model URL : https://huggingface.co/Emily/fyp ### Model Description : No model card New: Create and edit this model card directly on the website!
Emily/fypmodel
https://huggingface.co/Emily/fypmodel
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emily/fypmodel ### Model URL : https://huggingface.co/Emily/fypmodel ### Model Description : No model card New: Create and edit this model card directly on the website!
Emirhan/51k-finetuned-bert-model
https://huggingface.co/Emirhan/51k-finetuned-bert-model
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emirhan/51k-finetuned-bert-model ### Model URL : https://huggingface.co/Emirhan/51k-finetuned-bert-model ### Model Description : No model card New: Create and edit this model card directly on the website!
Emmanuel/bert-finetuned-ner-accelerate
https://huggingface.co/Emmanuel/bert-finetuned-ner-accelerate
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emmanuel/bert-finetuned-ner-accelerate ### Model URL : https://huggingface.co/Emmanuel/bert-finetuned-ner-accelerate ### Model Description : No model card New: Create and edit this model card directly on the website!
Emmanuel/bert-finetuned-ner
https://huggingface.co/Emmanuel/bert-finetuned-ner
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emmanuel/bert-finetuned-ner ### Model URL : https://huggingface.co/Emmanuel/bert-finetuned-ner ### Model Description : This model is a fine-tuned version of bert-base-cased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Emran/ClinicalBERT_ICD10_Categories
https://huggingface.co/Emran/ClinicalBERT_ICD10_Categories
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emran/ClinicalBERT_ICD10_Categories ### Model URL : https://huggingface.co/Emran/ClinicalBERT_ICD10_Categories ### Model Description : No model card New: Create and edit this model card directly on the website!
Emran/ClinicalBERT_ICD10_Full
https://huggingface.co/Emran/ClinicalBERT_ICD10_Full
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emran/ClinicalBERT_ICD10_Full ### Model URL : https://huggingface.co/Emran/ClinicalBERT_ICD10_Full ### Model Description : No model card New: Create and edit this model card directly on the website!
Emran/ClinicalBERT_ICD10_Full_200_epoch
https://huggingface.co/Emran/ClinicalBERT_ICD10_Full_200_epoch
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emran/ClinicalBERT_ICD10_Full_200_epoch ### Model URL : https://huggingface.co/Emran/ClinicalBERT_ICD10_Full_200_epoch ### Model Description : No model card New: Create and edit this model card directly on the website!
Emran/ClinicalBERT_description_full_ICD10_Code
https://huggingface.co/Emran/ClinicalBERT_description_full_ICD10_Code
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Emran/ClinicalBERT_description_full_ICD10_Code ### Model URL : https://huggingface.co/Emran/ClinicalBERT_description_full_ICD10_Code ### Model Description : No model card New: Create and edit this model card directly on the website!
Ender/Jfxosn
https://huggingface.co/Ender/Jfxosn
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ender/Jfxosn ### Model URL : https://huggingface.co/Ender/Jfxosn ### Model Description : No model card New: Create and edit this model card directly on the website!
Enego-Comley/SuperNeg99-1
https://huggingface.co/Enego-Comley/SuperNeg99-1
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Enego-Comley/SuperNeg99-1 ### Model URL : https://huggingface.co/Enego-Comley/SuperNeg99-1 ### Model Description : No model card New: Create and edit this model card directly on the website!
Enes3774/gpt
https://huggingface.co/Enes3774/gpt
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Enes3774/gpt ### Model URL : https://huggingface.co/Enes3774/gpt ### Model Description : No model card New: Create and edit this model card directly on the website!
Enes3774/gpt2
https://huggingface.co/Enes3774/gpt2
This is my model.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Enes3774/gpt2 ### Model URL : https://huggingface.co/Enes3774/gpt2 ### Model Description : This is my model.
EngNada/sinai-voice-ar-stt-demo-colabb
https://huggingface.co/EngNada/sinai-voice-ar-stt-demo-colabb
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EngNada/sinai-voice-ar-stt-demo-colabb ### Model URL : https://huggingface.co/EngNada/sinai-voice-ar-stt-demo-colabb ### Model Description : No model card New: Create and edit this model card directly on the website!
EngNada/wav2vec2-large-xlsr-53-demo-colab
https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EngNada/wav2vec2-large-xlsr-53-demo-colab ### Model URL : https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo-colab ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
EngNada/wav2vec2-large-xlsr-53-demo1-colab
https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo1-colab
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EngNada/wav2vec2-large-xlsr-53-demo1-colab ### Model URL : https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo1-colab ### Model Description : No model card New: Create and edit this model card directly on the website!
EngNada/wav2vec2-large-xlsr-53-demo1-colab1
https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo1-colab1
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EngNada/wav2vec2-large-xlsr-53-demo1-colab1 ### Model URL : https://huggingface.co/EngNada/wav2vec2-large-xlsr-53-demo1-colab1 ### Model Description : No model card New: Create and edit this model card directly on the website!
Engin/DialoGPT-small-joshua
https://huggingface.co/Engin/DialoGPT-small-joshua
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Engin/DialoGPT-small-joshua ### Model URL : https://huggingface.co/Engin/DialoGPT-small-joshua ### Model Description : No model card New: Create and edit this model card directly on the website!
EnsarEmirali/distilbert-base-uncased-finetuned-emotion
https://huggingface.co/EnsarEmirali/distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EnsarEmirali/distilbert-base-uncased-finetuned-emotion ### Model URL : https://huggingface.co/EnsarEmirali/distilbert-base-uncased-finetuned-emotion ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Enutodu/QnA
https://huggingface.co/Enutodu/QnA
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Enutodu/QnA ### Model URL : https://huggingface.co/Enutodu/QnA ### Model Description : No model card New: Create and edit this model card directly on the website!
Eren/gpt-2-small-the-office
https://huggingface.co/Eren/gpt-2-small-the-office
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eren/gpt-2-small-the-office ### Model URL : https://huggingface.co/Eren/gpt-2-small-the-office ### Model Description : No model card New: Create and edit this model card directly on the website!
Erfan/mT5-base_Farsi_Title_Generator
https://huggingface.co/Erfan/mT5-base_Farsi_Title_Generator
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Erfan/mT5-base_Farsi_Title_Generator ### Model URL : https://huggingface.co/Erfan/mT5-base_Farsi_Title_Generator ### Model Description :
Erfan/mT5-base_Farsi_Title_Generator_plus
https://huggingface.co/Erfan/mT5-base_Farsi_Title_Generator_plus
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Erfan/mT5-base_Farsi_Title_Generator_plus ### Model URL : https://huggingface.co/Erfan/mT5-base_Farsi_Title_Generator_plus ### Model Description : No model card New: Create and edit this model card directly on the website!
Erfan/mT5-small_Farsi_Title_Generator
https://huggingface.co/Erfan/mT5-small_Farsi_Title_Generator
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Erfan/mT5-small_Farsi_Title_Generator ### Model URL : https://huggingface.co/Erfan/mT5-small_Farsi_Title_Generator ### Model Description :
ErickMMuniz/bert-base-uncased-contracts-finetuned-squad
https://huggingface.co/ErickMMuniz/bert-base-uncased-contracts-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ErickMMuniz/bert-base-uncased-contracts-finetuned-squad ### Model URL : https://huggingface.co/ErickMMuniz/bert-base-uncased-contracts-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
ErickMMuniz/distilbert-base-uncased-finetuned-squad
https://huggingface.co/ErickMMuniz/distilbert-base-uncased-finetuned-squad
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ErickMMuniz/distilbert-base-uncased-finetuned-squad ### Model URL : https://huggingface.co/ErickMMuniz/distilbert-base-uncased-finetuned-squad ### Model Description : No model card New: Create and edit this model card directly on the website!
Ericles/Arcaneme
https://huggingface.co/Ericles/Arcaneme
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ericles/Arcaneme ### Model URL : https://huggingface.co/Ericles/Arcaneme ### Model Description : No model card New: Create and edit this model card directly on the website!
Erikaka/DialoGPT-small-harrypotter
https://huggingface.co/Erikaka/DialoGPT-small-harrypotter
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Erikaka/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/Erikaka/DialoGPT-small-harrypotter ### Model Description : No model card New: Create and edit this model card directly on the website!
Erikaka/DialoGPT-small-loki
https://huggingface.co/Erikaka/DialoGPT-small-loki
#Loki DialoGPT Model
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Erikaka/DialoGPT-small-loki ### Model URL : https://huggingface.co/Erikaka/DialoGPT-small-loki ### Model Description : #Loki DialoGPT Model
Eris/Tytrack
https://huggingface.co/Eris/Tytrack
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eris/Tytrack ### Model URL : https://huggingface.co/Eris/Tytrack ### Model Description : No model card New: Create and edit this model card directly on the website!
ErisW/Meeee
https://huggingface.co/ErisW/Meeee
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ErisW/Meeee ### Model URL : https://huggingface.co/ErisW/Meeee ### Model Description : No model card New: Create and edit this model card directly on the website!
Eshtemele/DialoGPT-large-Michael
https://huggingface.co/Eshtemele/DialoGPT-large-Michael
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eshtemele/DialoGPT-large-Michael ### Model URL : https://huggingface.co/Eshtemele/DialoGPT-large-Michael ### Model Description : No model card New: Create and edit this model card directly on the website!
EsiLambda/distilbert-base-uncased-finetuned-ner
https://huggingface.co/EsiLambda/distilbert-base-uncased-finetuned-ner
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EsiLambda/distilbert-base-uncased-finetuned-ner ### Model URL : https://huggingface.co/EsiLambda/distilbert-base-uncased-finetuned-ner ### Model Description : No model card New: Create and edit this model card directly on the website!
Esmee/yers
https://huggingface.co/Esmee/yers
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Esmee/yers ### Model URL : https://huggingface.co/Esmee/yers ### Model Description : No model card New: Create and edit this model card directly on the website!
Essa99/wav2vec2-large-xls-r-300m-tr-colab
https://huggingface.co/Essa99/wav2vec2-large-xls-r-300m-tr-colab
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Essa99/wav2vec2-large-xls-r-300m-tr-colab ### Model URL : https://huggingface.co/Essa99/wav2vec2-large-xls-r-300m-tr-colab ### Model Description : No model card New: Create and edit this model card directly on the website!
EstebanGarces/dummy-model
https://huggingface.co/EstebanGarces/dummy-model
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EstebanGarces/dummy-model ### Model URL : https://huggingface.co/EstebanGarces/dummy-model ### Model Description : No model card New: Create and edit this model card directly on the website!
EstoyDePaso/DialoGPT-small-harrypotter
https://huggingface.co/EstoyDePaso/DialoGPT-small-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EstoyDePaso/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/EstoyDePaso/DialoGPT-small-harrypotter ### Model Description :
Eternally12/Such
https://huggingface.co/Eternally12/Such
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eternally12/Such ### Model URL : https://huggingface.co/Eternally12/Such ### Model Description : No model card New: Create and edit this model card directly on the website!
EthanChen0418/domain-cls-nine-classes
https://huggingface.co/EthanChen0418/domain-cls-nine-classes
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthanChen0418/domain-cls-nine-classes ### Model URL : https://huggingface.co/EthanChen0418/domain-cls-nine-classes ### Model Description : No model card New: Create and edit this model card directly on the website!
EthanChen0418/few-shot-model-five-classes
https://huggingface.co/EthanChen0418/few-shot-model-five-classes
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthanChen0418/few-shot-model-five-classes ### Model URL : https://huggingface.co/EthanChen0418/few-shot-model-five-classes ### Model Description : No model card New: Create and edit this model card directly on the website!
EthanChen0418/intent_cls
https://huggingface.co/EthanChen0418/intent_cls
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthanChen0418/intent_cls ### Model URL : https://huggingface.co/EthanChen0418/intent_cls ### Model Description : No model card New: Create and edit this model card directly on the website!
EthanChen0418/seven-classed-domain-cls
https://huggingface.co/EthanChen0418/seven-classed-domain-cls
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthanChen0418/seven-classed-domain-cls ### Model URL : https://huggingface.co/EthanChen0418/seven-classed-domain-cls ### Model Description : No model card New: Create and edit this model card directly on the website!
EthanChen0418/six-classed-domain-cls
https://huggingface.co/EthanChen0418/six-classed-domain-cls
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthanChen0418/six-classed-domain-cls ### Model URL : https://huggingface.co/EthanChen0418/six-classed-domain-cls ### Model Description : No model card New: Create and edit this model card directly on the website!
EthonLee/Lethon202103test001
https://huggingface.co/EthonLee/Lethon202103test001
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EthonLee/Lethon202103test001 ### Model URL : https://huggingface.co/EthonLee/Lethon202103test001 ### Model Description : No model card New: Create and edit this model card directly on the website!
Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi
https://huggingface.co/Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi ### Model URL : https://huggingface.co/Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi ### Model Description : No model card New: Create and edit this model card directly on the website!
Eulalief/model_name
https://huggingface.co/Eulalief/model_name
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eulalief/model_name ### Model URL : https://huggingface.co/Eulalief/model_name ### Model Description : No model card New: Create and edit this model card directly on the website!
Eunhui/bert-base-cased-wikitext2
https://huggingface.co/Eunhui/bert-base-cased-wikitext2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eunhui/bert-base-cased-wikitext2 ### Model URL : https://huggingface.co/Eunhui/bert-base-cased-wikitext2 ### Model Description : No model card New: Create and edit this model card directly on the website!
Eunji/kant
https://huggingface.co/Eunji/kant
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eunji/kant ### Model URL : https://huggingface.co/Eunji/kant ### Model Description : No model card New: Create and edit this model card directly on the website!
Eunku/KorLangModel
https://huggingface.co/Eunku/KorLangModel
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eunku/KorLangModel ### Model URL : https://huggingface.co/Eunku/KorLangModel ### Model Description : No model card New: Create and edit this model card directly on the website!
Eunooeh/mnmt_gpt2
https://huggingface.co/Eunooeh/mnmt_gpt2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eunooeh/mnmt_gpt2 ### Model URL : https://huggingface.co/Eunooeh/mnmt_gpt2 ### Model Description : No model card New: Create and edit this model card directly on the website!
Eunooeh/test
https://huggingface.co/Eunooeh/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Eunooeh/test ### Model URL : https://huggingface.co/Eunooeh/test ### Model Description : No model card New: Create and edit this model card directly on the website!
EuropeanTurtle/DialoGPT-small-mrcobb
https://huggingface.co/EuropeanTurtle/DialoGPT-small-mrcobb
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EuropeanTurtle/DialoGPT-small-mrcobb ### Model URL : https://huggingface.co/EuropeanTurtle/DialoGPT-small-mrcobb ### Model Description :
EvaRo/roberta-base-bne-finetuned-amazon_reviews_multi
https://huggingface.co/EvaRo/roberta-base-bne-finetuned-amazon_reviews_multi
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EvaRo/roberta-base-bne-finetuned-amazon_reviews_multi ### Model URL : https://huggingface.co/EvaRo/roberta-base-bne-finetuned-amazon_reviews_multi ### Model Description : No model card New: Create and edit this model card directly on the website!
Evgen/model_awara_text
https://huggingface.co/Evgen/model_awara_text
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Evgen/model_awara_text ### Model URL : https://huggingface.co/Evgen/model_awara_text ### Model Description : No model card New: Create and edit this model card directly on the website!
Evgeneus/distilbert-base-uncased-finetuned-ner
https://huggingface.co/Evgeneus/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Evgeneus/distilbert-base-uncased-finetuned-ner ### Model URL : https://huggingface.co/Evgeneus/distilbert-base-uncased-finetuned-ner ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: