gokulraj/preon-whisper-tiny-trial-4
https://huggingface.co/gokulraj/preon-whisper-tiny-trial-4
This model is a fine-tuned version of openai/whisper-medium on a custom dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
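The card does not include usage code; a minimal sketch of loading the checkpoint with the transformers pipeline API might look like the following (the audio filename is a placeholder, and the import is deferred so the module loads without transformers installed):

```python
MODEL_ID = "gokulraj/preon-whisper-tiny-trial-4"

def transcribe(audio_path: str) -> str:
    """Run speech-to-text with the fine-tuned Whisper checkpoint."""
    # Lazy import: needs the transformers package; downloads weights on first call.
    from transformers import pipeline
    asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
    return asr(audio_path)["text"]

if __name__ == "__main__":
    # "sample.wav" is illustrative; substitute your own audio file.
    print(transcribe("sample.wav"))
```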
tom192180/distilbert-base-uncased_odm_zphr_0st10sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st10sd
No model card.
ashawkey/LGM
https://huggingface.co/ashawkey/LGM
This model contains the pretrained weights for LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. LGM can generate 3D objects from an image or text within 5 seconds at high resolution, based on Gaussian splatting. The model is trained on a ~80K subset of Objaverse. For more details, including how to download, load, and run the model, please refer to the paper and the repo.
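As a hedged sketch (the card defers to the repo for exact instructions), the weights can be fetched from the Hub with huggingface_hub; the `"model.safetensors"` filename is an assumption about the repo layout, so check the repo's file list if it differs:

```python
REPO_ID = "ashawkey/LGM"

def download_weights(filename: str = "model.safetensors") -> str:
    """Download one weight file from the LGM repo; returns the local path."""
    # Lazy import: needs the huggingface_hub package.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID, filename=filename)

if __name__ == "__main__":
    print(download_weights())
```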
SolaireOfTheSun/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned
https://huggingface.co/SolaireOfTheSun/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned
No model card.
varun-v-rao/t5-base-bn-adapter-1.79M-snli-model3
https://huggingface.co/varun-v-rao/t5-base-bn-adapter-1.79M-snli-model3
This model is a fine-tuned version of t5-base on an unspecified dataset (the card lists "None"). Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
varun-v-rao/opt-1.3b-lora-3.15M-snli-model3
https://huggingface.co/varun-v-rao/opt-1.3b-lora-3.15M-snli-model3
This model is a fine-tuned version of facebook/opt-1.3b on an unspecified dataset (the card lists "None"). Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
gokulraj/whisper-small-trail-5-preon
https://huggingface.co/gokulraj/whisper-small-trail-5-preon
This model is a fine-tuned version of openai/whisper-small on a custom dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
chathuranga-jayanath/codet5-small-v21
https://huggingface.co/chathuranga-jayanath/codet5-small-v21
No model card.
GSalimp/ChatUFOPTreinado
https://huggingface.co/GSalimp/ChatUFOPTreinado
Auto-generated model card template; no fields have been filled in and every section is marked [More Information Needed].
lecslab/byt5-translation-all_st_unseg-v2
https://huggingface.co/lecslab/byt5-translation-all_st_unseg-v2
No model card.
tom192180/distilbert-base-uncased_odm_zphr_0st11sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st11sd
No model card.
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model2
https://huggingface.co/varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model2
This model is a fine-tuned version of bert-large-cased on an unspecified dataset (the card lists "None"). Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
contenfire/antrea_issue_v1
https://huggingface.co/contenfire/antrea_issue_v1
No model card.
ankhamun/IIIIIxII0-0IIxIIIII
https://huggingface.co/ankhamun/IIIIIxII0-0IIxIIIII
No model card.
tyzhu/lmind_hotpot_train8000_eval7405_v1_qa_gpt2-xl
https://huggingface.co/tyzhu/lmind_hotpot_train8000_eval7405_v1_qa_gpt2-xl
This model is a fine-tuned version of gpt2-xl on the tyzhu/lmind_hotpot_train8000_eval7405_v1_qa dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_gpt2-xl
https://huggingface.co/tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_gpt2-xl
This model is a fine-tuned version of gpt2-xl on the tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
tyzhu/lmind_hotpot_train8000_eval7405_v1_docidx_gpt2-xl
https://huggingface.co/tyzhu/lmind_hotpot_train8000_eval7405_v1_docidx_gpt2-xl
This model is a fine-tuned version of gpt2-xl on an unknown dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
mathreader/ppo-LunarLander-v2
https://huggingface.co/mathreader/ppo-LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2, trained with the stable-baselines3 library. The card's usage section is an unfilled placeholder ("TODO: Add your code").
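Since the card leaves the usage code as a TODO, here is a hedged sketch of loading the checkpoint from the Hub with huggingface_sb3 and rolling out one episode; the zip filename follows the usual deep-RL-course naming convention and is an assumption:

```python
REPO_ID = "mathreader/ppo-LunarLander-v2"

def load_agent(filename: str = "ppo-LunarLander-v2.zip"):
    """Download the checkpoint and restore the PPO policy."""
    # Lazy imports: need the huggingface_sb3 and stable-baselines3 packages.
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import PPO
    checkpoint = load_from_hub(repo_id=REPO_ID, filename=filename)
    return PPO.load(checkpoint)

if __name__ == "__main__":
    import gymnasium as gym
    model = load_agent()
    env = gym.make("LunarLander-v2")
    obs, _ = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
```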
RMarvinMT/PoliticaYeconomia
https://huggingface.co/RMarvinMT/PoliticaYeconomia
No description was crawled for this model.
blaze999/finetuned-ner-conll
https://huggingface.co/blaze999/finetuned-ner-conll
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
thrunlab/sparse_sparse_80_percent_pretraining_warmup
https://huggingface.co/thrunlab/sparse_sparse_80_percent_pretraining_warmup
No model card.
chenhugging/mistral-7b-medqa-v1
https://huggingface.co/chenhugging/mistral-7b-medqa-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medical_meadow_medqa dataset. The card reports two evaluation configurations in lm-evaluation-harness style: the adapter run, hf (pretrained=mistralai/Mistral-7B-v0.1, parallelize=True, load_in_4bit=True, peft=chenhugging/mistral-7b-medqa-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None; and the base-model run, hf (pretrained=mistralai/Mistral-7B-v0.1, parallelize=True, load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1. The training hyperparameters listed on the card were not captured in the crawled text.
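The evaluation settings above (4-bit base model plus a PEFT adapter) can be reproduced for inference roughly as follows; this is a sketch using transformers, peft, and bitsandbytes, not code from the card:

```python
BASE_ID = "mistralai/Mistral-7B-v0.1"
ADAPTER_ID = "chenhugging/mistral-7b-medqa-v1"

def load_model():
    """Load the 4-bit quantized base model and attach the LoRA adapter."""
    # Lazy imports: need transformers, peft, and bitsandbytes installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel
    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_ID,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map="auto",
    )
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return model, tokenizer
```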
tom192180/distilbert-base-uncased_odm_zphr_0st12sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st12sd
No model card.
Sacbe/ViT_SAM_Classification
https://huggingface.co/Sacbe/ViT_SAM_Classification
The model was trained using the base VisionTransformer model together with Google's SAM optimizer and a negative log-likelihood loss, on the Wildfire data. The results show that the classifier reached 97% accuracy with only 10 epochs of training. The underlying theory is summarized below.

Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is a key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state-of-the-art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example, reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class. [1] A. Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv, June 3, 2021. Accessed: November 12, 2023. [Online]. Available: http://arxiv.org/abs/2010.11929

SAM simultaneously minimizes loss value and loss sharpness. In particular, it seeks parameters that lie in neighborhoods having uniformly low loss. SAM improves model generalization and yields SoTA performance for several datasets. Additionally, it provides robustness to label noise on par with that provided by SoTA procedures that specifically target learning with noisy labels. ResNet loss landscape at the end of training with and without SAM: sharpness-aware updates lead to a significantly wider minimum, which then leads to better generalization properties. [2] P. Foret, A. Kleiner, and H. Mobahi, "Sharpness-Aware Minimization for Efficiently Improving Generalization", 2021.

The NLL loss is useful for training a classification problem with $C$ classes. If provided, the optional argument weight should be a 1D Tensor assigning a weight to each of the classes; this is particularly useful when you have an unbalanced training set. The input given through a forward call is expected to contain log-probabilities of each class. The input has to be a Tensor of size either (minibatch, $C$) or (minibatch, $C, d_1, d_2, \ldots, d_K$) with $K \geq 1$ for the $K$-dimensional case; the latter is useful for higher-dimension inputs, such as computing the NLL loss per pixel for 2D images. Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer as the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer. The target that this loss expects should be a class index in the range $[0, C-1]$, where $C$ is the number of classes; if ignore_index is specified, this loss also accepts that class index (which may not necessarily be in the class range). The unreduced (i.e. with reduction set to 'none') loss can be described as

$$\ell(x, y) = L = \{l_1, \ldots, l_N\}^\top, \quad l_n = -w_{y_n} x_{n, y_n}, \quad w_c = \text{weight}[c] \cdot 1,$$

where $x$ is the input, $y$ is the target, $w$ is the weight, and $N$ is the batch size. If reduction is not 'none' (the default is 'mean'), then

$$\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{'mean'} \\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'}. \end{cases}$$
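The relationship described above can be checked with a small PyTorch example: NLLLoss applied to LogSoftmax output matches CrossEntropyLoss applied to the raw logits (the numbers are illustrative, not from the Wildfire data):

```python
import torch
import torch.nn as nn

# A minibatch of N=2 examples over C=3 classes (illustrative logits).
logits = torch.tensor([[2.0, 1.0, 0.5],
                       [0.1, 3.0, 0.2]])
targets = torch.tensor([0, 1])  # class indices in [0, C-1]

# NLLLoss expects log-probabilities, i.e. LogSoftmax as the last layer.
log_probs = nn.LogSoftmax(dim=1)(logits)
nll = nn.NLLLoss()(log_probs, targets)          # default reduction='mean'

# CrossEntropyLoss fuses LogSoftmax + NLLLoss, so it takes raw logits.
ce = nn.CrossEntropyLoss()(logits, targets)

assert torch.allclose(nll, ce)
```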
tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa_gpt2-xl
https://huggingface.co/tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa_gpt2-xl
This model is a fine-tuned version of gpt2-xl on the tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa dataset. Evaluation results, intended uses, and limitations: more information needed. The training hyperparameters listed on the card were not captured in the crawled text.
Krisbiantoro/mixtral-id-chatml-700
https://huggingface.co/Krisbiantoro/mixtral-id-chatml-700
Auto-generated model card template; no fields have been filled in and every section is marked [More Information Needed].
deepnetguy/e97c9ebe-zeta
https://huggingface.co/deepnetguy/e97c9ebe-zeta
Failed to access https://huggingface.co/deepnetguy/e97c9ebe-zeta - HTTP Status Code: 404
mertbozkir/mistral-gsm8k-finetune
https://huggingface.co/mertbozkir/mistral-gsm8k-finetune
This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was automatically generated from the template; no fields have been filled in and every section is marked [More Information Needed].
tom192180/distilbert-base-uncased_odm_zphr_0st13sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st13sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st13sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st13sd ### Model Description : No model card New: Create and edit this model card directly on the website!
Chillarmo/whisper-small-hy-AM
https://huggingface.co/Chillarmo/whisper-small-hy-AM
Chillarmo/whisper-small-hy-AM is an AI model designed for speech-to-text conversion specifically tailored for the Armenian language. Leveraging the power of fine-tuning, this model, named whisper-small-hy-AM, is based on openai/whisper-small and trained on the common_voice_16_1 dataset. It achieves the following results on the evaluation set: The training data consists of Mozilla Common Voice version 16.1. Plans for future improvements include continuing the training process and integrating an additional 10 hours of data from datasets such as google/fleurs and possibly google/xtreme_s. Despite its current performance, efforts are underway to further reduce the WER. The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Chillarmo/whisper-small-hy-AM ### Model URL : https://huggingface.co/Chillarmo/whisper-small-hy-AM ### Model Description : Chillarmo/whisper-small-hy-AM is an AI model designed for speech-to-text conversion specifically tailored for the Armenian language. Leveraging the power of fine-tuning, this model, named whisper-small-hy-AM, is based on openai/whisper-small and trained on the common_voice_16_1 dataset. It achieves the following results on the evaluation set: The training data consists of Mozilla Common Voice version 16.1. Plans for future improvements include continuing the training process and integrating an additional 10 hours of data from datasets such as google/fleurs and possibly google/xtreme_s. Despite its current performance, efforts are underway to further reduce the WER. The following hyperparameters were used during training:
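The Chillarmo card above reports its evaluation in WER (word error rate). For reference, WER is the word-level edit distance divided by the reference length; below is a minimal pure-Python sketch (the actual evaluation presumably uses a library such as jiwer or Hugging Face `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```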
jyoung105/albedobase-xl-v21
https://huggingface.co/jyoung105/albedobase-xl-v21
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jyoung105/albedobase-xl-v21 ### Model URL : https://huggingface.co/jyoung105/albedobase-xl-v21 ### Model Description : No model card New: Create and edit this model card directly on the website!
tyson0420/stack_llama-clang
https://huggingface.co/tyson0420/stack_llama-clang
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tyson0420/stack_llama-clang ### Model URL : https://huggingface.co/tyson0420/stack_llama-clang ### Model Description : This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
badmonk/aimiyoshikawa
https://huggingface.co/badmonk/aimiyoshikawa
Use the code below to get started with the model.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : badmonk/aimiyoshikawa ### Model URL : https://huggingface.co/badmonk/aimiyoshikawa ### Model Description : Use the code below to get started with the model.
tom192180/distilbert-base-uncased_odm_zphr_0st14sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st14sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st14sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st14sd ### Model Description : No model card New: Create and edit this model card directly on the website!
janhq/stealth-finance-v1-GGUF
https://huggingface.co/janhq/stealth-finance-v1-GGUF
Jan - Discord This is a GGUF version of jan-hq/stealth-finance-v1. Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, which are practical, useful assistants for humans and businesses in everyday life. This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so it can convert into various formats.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : janhq/stealth-finance-v1-GGUF ### Model URL : https://huggingface.co/janhq/stealth-finance-v1-GGUF ### Model Description : Jan - Discord This is a GGUF version of jan-hq/stealth-finance-v1. Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, which are practical, useful assistants for humans and businesses in everyday life. This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so it can convert into various formats.
Vanns/Vannz
https://huggingface.co/Vanns/Vannz
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Vanns/Vannz ### Model URL : https://huggingface.co/Vanns/Vannz ### Model Description : No model card New: Create and edit this model card directly on the website!
samiabat/my-lora-model-10
https://huggingface.co/samiabat/my-lora-model-10
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : samiabat/my-lora-model-10 ### Model URL : https://huggingface.co/samiabat/my-lora-model-10 ### Model Description : No model card New: Create and edit this model card directly on the website!
mauricett/lichess_sf
https://huggingface.co/mauricett/lichess_sf
Failed to access https://huggingface.co/mauricett/lichess_sf - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : mauricett/lichess_sf ### Model URL : https://huggingface.co/mauricett/lichess_sf ### Model Description : Failed to access https://huggingface.co/mauricett/lichess_sf - HTTP Status Code: 404
ctsy/drone-codes-model
https://huggingface.co/ctsy/drone-codes-model
Failed to access https://huggingface.co/ctsy/drone-codes-model - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ctsy/drone-codes-model ### Model URL : https://huggingface.co/ctsy/drone-codes-model ### Model Description : Failed to access https://huggingface.co/ctsy/drone-codes-model - HTTP Status Code: 404
dzagardo/gcp_test_v2
https://huggingface.co/dzagardo/gcp_test_v2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : dzagardo/gcp_test_v2 ### Model URL : https://huggingface.co/dzagardo/gcp_test_v2 ### Model Description : No model card New: Create and edit this model card directly on the website!
car13mesquita/bert-finetuned-sem_eval-rest14-english-2
https://huggingface.co/car13mesquita/bert-finetuned-sem_eval-rest14-english-2
This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : car13mesquita/bert-finetuned-sem_eval-rest14-english-2 ### Model URL : https://huggingface.co/car13mesquita/bert-finetuned-sem_eval-rest14-english-2 ### Model Description : This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
vinhtran2611/imdb
https://huggingface.co/vinhtran2611/imdb
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : vinhtran2611/imdb ### Model URL : https://huggingface.co/vinhtran2611/imdb ### Model Description : No model card New: Create and edit this model card directly on the website!
tom192180/distilbert-base-uncased_odm_zphr_0st15sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st15sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st15sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st15sd ### Model Description : No model card New: Create and edit this model card directly on the website!
Peverell/mnist-resnet18
https://huggingface.co/Peverell/mnist-resnet18
Dataset: MNIST Model-architecture: ResNet-18 training accuracy: 0.9988 testing accuracy: 0.9934
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Peverell/mnist-resnet18 ### Model URL : https://huggingface.co/Peverell/mnist-resnet18 ### Model Description : Dataset: MNIST Model-architecture: ResNet-18 training accuracy: 0.9988 testing accuracy: 0.9934
armaanp/clean-gpt-wikitext2
https://huggingface.co/armaanp/clean-gpt-wikitext2
Failed to access https://huggingface.co/armaanp/clean-gpt-wikitext2 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : armaanp/clean-gpt-wikitext2 ### Model URL : https://huggingface.co/armaanp/clean-gpt-wikitext2 ### Model Description : Failed to access https://huggingface.co/armaanp/clean-gpt-wikitext2 - HTTP Status Code: 404
kaist-ai/prometheus-7b-v1.9-beta-1
https://huggingface.co/kaist-ai/prometheus-7b-v1.9-beta-1
Failed to access https://huggingface.co/kaist-ai/prometheus-7b-v1.9-beta-1 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : kaist-ai/prometheus-7b-v1.9-beta-1 ### Model URL : https://huggingface.co/kaist-ai/prometheus-7b-v1.9-beta-1 ### Model Description : Failed to access https://huggingface.co/kaist-ai/prometheus-7b-v1.9-beta-1 - HTTP Status Code: 404
jetaudio/novel_zh2vi_seallm
https://huggingface.co/jetaudio/novel_zh2vi_seallm
This model is a fine-tuned version of SeaLLMs/SeaLLM-7B-v2 on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jetaudio/novel_zh2vi_seallm ### Model URL : https://huggingface.co/jetaudio/novel_zh2vi_seallm ### Model Description : This model is a fine-tuned version of SeaLLMs/SeaLLM-7B-v2 on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
macadeliccc/Smaug-34b-v0.1-slerp
https://huggingface.co/macadeliccc/Smaug-34b-v0.1-slerp
Failed to access https://huggingface.co/macadeliccc/Smaug-34b-v0.1-slerp - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : macadeliccc/Smaug-34b-v0.1-slerp ### Model URL : https://huggingface.co/macadeliccc/Smaug-34b-v0.1-slerp ### Model Description : Failed to access https://huggingface.co/macadeliccc/Smaug-34b-v0.1-slerp - HTTP Status Code: 404
tom192180/distilbert-base-uncased_odm_zphr_0st16sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st16sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st16sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st16sd ### Model Description : No model card New: Create and edit this model card directly on the website!
mach-12/t5-small-finetuned-mlsum-de
https://huggingface.co/mach-12/t5-small-finetuned-mlsum-de
This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : mach-12/t5-small-finetuned-mlsum-de ### Model URL : https://huggingface.co/mach-12/t5-small-finetuned-mlsum-de ### Model Description : This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
LoneStriker/DeepMagic-Coder-7b-GGUF
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-GGUF
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-GGUF ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-GGUF ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
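The DeepMagic-Coder card above names mergekit's task_arithmetic method. Conceptually, each fine-tune is reduced to a "task vector" (its delta from the shared base model), and the weighted deltas are summed back onto the base. A toy sketch on plain dicts of floats (the real method operates on full tensors; the parameter names and weights here are illustrative, not taken from the card's elided configuration):

```python
def task_arithmetic(base, finetunes, weights=None):
    """Merge fine-tuned checkpoints that share a base:
    merged = base + sum_i w_i * (finetune_i - base)."""
    weights = weights or [1.0] * len(finetunes)
    merged = {}
    for name, base_val in base.items():
        delta = sum(w * (ft[name] - base_val)
                    for w, ft in zip(weights, finetunes))
        merged[name] = base_val + delta
    return merged

base = {"layer.w": 1.0}
coder_a = {"layer.w": 1.5}   # hypothetical fine-tune #1
coder_b = {"layer.w": 0.8}   # hypothetical fine-tune #2
# base + averaged deltas: 1.0 + 0.5*0.5 + 0.5*(-0.2) = 1.15
print(task_arithmetic(base, [coder_a, coder_b], weights=[0.5, 0.5]))
```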
MarkusLiu/Revised-lamma
https://huggingface.co/MarkusLiu/Revised-lamma
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : MarkusLiu/Revised-lamma ### Model URL : https://huggingface.co/MarkusLiu/Revised-lamma ### Model Description : No model card New: Create and edit this model card directly on the website!
Unplanted2107/llama-chat-dolly
https://huggingface.co/Unplanted2107/llama-chat-dolly
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Unplanted2107/llama-chat-dolly ### Model URL : https://huggingface.co/Unplanted2107/llama-chat-dolly ### Model Description : No model card New: Create and edit this model card directly on the website!
Fihade/Qwen1-5-7b-gguf
https://huggingface.co/Fihade/Qwen1-5-7b-gguf
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Fihade/Qwen1-5-7b-gguf ### Model URL : https://huggingface.co/Fihade/Qwen1-5-7b-gguf ### Model Description :
vinhtran2611/tmp
https://huggingface.co/vinhtran2611/tmp
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : vinhtran2611/tmp ### Model URL : https://huggingface.co/vinhtran2611/tmp ### Model Description : No model card New: Create and edit this model card directly on the website!
andrealexroom/LexLLMv0.0.0.x.10.4.1.1
https://huggingface.co/andrealexroom/LexLLMv0.0.0.x.10.4.1.1
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : andrealexroom/LexLLMv0.0.0.x.10.4.1.1 ### Model URL : https://huggingface.co/andrealexroom/LexLLMv0.0.0.x.10.4.1.1 ### Model Description : No model card New: Create and edit this model card directly on the website!
superfriends/megadog-v3
https://huggingface.co/superfriends/megadog-v3
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : superfriends/megadog-v3 ### Model URL : https://huggingface.co/superfriends/megadog-v3 ### Model Description : No model card New: Create and edit this model card directly on the website!
tom192180/distilbert-base-uncased_odm_zphr_0st17sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st17sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st17sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st17sd ### Model Description : No model card New: Create and edit this model card directly on the website!
niautami/Flan-t5-small-custom
https://huggingface.co/niautami/Flan-t5-small-custom
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : niautami/Flan-t5-small-custom ### Model URL : https://huggingface.co/niautami/Flan-t5-small-custom ### Model Description : No model card New: Create and edit this model card directly on the website!
tom192180/distilbert-base-uncased_odm_zphr_0st18sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st18sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st18sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st18sd ### Model Description : No model card New: Create and edit this model card directly on the website!
gotchu/season-8-v2-solar
https://huggingface.co/gotchu/season-8-v2-solar
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: The following YAML configuration was used to produce this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : gotchu/season-8-v2-solar ### Model URL : https://huggingface.co/gotchu/season-8-v2-solar ### Model Description : This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: The following YAML configuration was used to produce this model:
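The gotchu/season-8-v2-solar card above names the SLERP merge method. SLERP interpolates along the great circle between two weight vectors rather than along the straight line between them. A minimal sketch in plain Python (mergekit's real SLERP works tensor-by-tensor and likewise falls back to linear interpolation for near-parallel vectors):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between vectors v0 and v1 at t in [0, 1]."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # near-parallel: fall back to plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # midpoint on the unit circle
```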
Legalaz/5DHjSTWSbZGJMuoQy4xcDUfCBCoZUJFysxhKXtTsujxBpkwe_vgg
https://huggingface.co/Legalaz/5DHjSTWSbZGJMuoQy4xcDUfCBCoZUJFysxhKXtTsujxBpkwe_vgg
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Legalaz/5DHjSTWSbZGJMuoQy4xcDUfCBCoZUJFysxhKXtTsujxBpkwe_vgg ### Model URL : https://huggingface.co/Legalaz/5DHjSTWSbZGJMuoQy4xcDUfCBCoZUJFysxhKXtTsujxBpkwe_vgg ### Model Description : No model card New: Create and edit this model card directly on the website!
incomprehensible/009
https://huggingface.co/incomprehensible/009
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : incomprehensible/009 ### Model URL : https://huggingface.co/incomprehensible/009 ### Model Description : No model card New: Create and edit this model card directly on the website!
cloudyu/60B-MoE-Coder-v2
https://huggingface.co/cloudyu/60B-MoE-Coder-v2
This is a 4-bit 60B MoE model trained by SFTTrainer, based on [cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO], sampling about 2000 cases from nampdn-ai/tiny-codes. Metrics: not tested. Code example:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : cloudyu/60B-MoE-Coder-v2 ### Model URL : https://huggingface.co/cloudyu/60B-MoE-Coder-v2 ### Model Description : This is a 4-bit 60B MoE model trained by SFTTrainer, based on [cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO], sampling about 2000 cases from nampdn-ai/tiny-codes. Metrics: not tested. Code example:
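The cloudyu card above describes a 4-bit checkpoint. As an illustration of what 4-bit quantization means, here is a toy symmetric round-to-nearest quantizer that maps floats to 4-bit integers plus one scale, and back. This is only a sketch: real 4-bit schemes such as the NF4 format used by bitsandbytes are block-wise and non-uniform.

```python
def quantize_4bit(values):
    """Symmetric 4-bit quantization: ints in [-8, 7] plus one float scale."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate floats from 4-bit ints and the scale."""
    return [x * scale for x in q]

q, s = quantize_4bit([0.1, -0.7, 0.35])
print(dequantize(q, s))  # approximate reconstruction of the inputs
```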
CharlieGamer717/SkilletNoRoboticVoice
https://huggingface.co/CharlieGamer717/SkilletNoRoboticVoice
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : CharlieGamer717/SkilletNoRoboticVoice ### Model URL : https://huggingface.co/CharlieGamer717/SkilletNoRoboticVoice ### Model Description :
Bluebomber182/Chris-Pine-RVC-Model
https://huggingface.co/Bluebomber182/Chris-Pine-RVC-Model
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Bluebomber182/Chris-Pine-RVC-Model ### Model URL : https://huggingface.co/Bluebomber182/Chris-Pine-RVC-Model ### Model Description :
taeseo06/Yolov7-KnifeDetectionFinetuningModel
https://huggingface.co/taeseo06/Yolov7-KnifeDetectionFinetuningModel
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : taeseo06/Yolov7-KnifeDetectionFinetuningModel ### Model URL : https://huggingface.co/taeseo06/Yolov7-KnifeDetectionFinetuningModel ### Model Description :
deepnetguy/zeta-4
https://huggingface.co/deepnetguy/zeta-4
Failed to access https://huggingface.co/deepnetguy/zeta-4 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : deepnetguy/zeta-4 ### Model URL : https://huggingface.co/deepnetguy/zeta-4 ### Model Description : Failed to access https://huggingface.co/deepnetguy/zeta-4 - HTTP Status Code: 404
trinath/LunarLander-v5
https://huggingface.co/trinath/LunarLander-v5
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. TODO: Add your code
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : trinath/LunarLander-v5 ### Model URL : https://huggingface.co/trinath/LunarLander-v5 ### Model Description : This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. TODO: Add your code
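The trinath card above is a stable-baselines3 PPO agent whose code snippet was left as a TODO. The core of PPO, independent of the environment, is the clipped surrogate objective; a minimal per-sample sketch follows (in practice stable-baselines3 computes this over batches of probability ratios and normalized advantages):

```python
def ppo_clip_objective(ratio, advantage, clip_eps=0.2):
    """Clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    clipped = max(1 - clip_eps, min(1 + clip_eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# A large policy ratio is clipped when the advantage is positive:
print(ppo_clip_objective(1.5, advantage=1.0))   # 1.2, not 1.5
# With a negative advantage the pessimistic (clipped) bound is taken:
print(ppo_clip_objective(0.5, advantage=-1.0))  # -0.8
```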
tom192180/distilbert-base-uncased_odm_zphr_0st19sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st19sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st19sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st19sd ### Model Description : No model card New: Create and edit this model card directly on the website!
eagle0504/warren-buffett-annual-letters-from-1977-to-2019
https://huggingface.co/eagle0504/warren-buffett-annual-letters-from-1977-to-2019
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : eagle0504/warren-buffett-annual-letters-from-1977-to-2019 ### Model URL : https://huggingface.co/eagle0504/warren-buffett-annual-letters-from-1977-to-2019 ### Model Description : No model card New: Create and edit this model card directly on the website!
LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2 ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2 ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
karunmv/my_awesome_opus_books_model
https://huggingface.co/karunmv/my_awesome_opus_books_model
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : karunmv/my_awesome_opus_books_model ### Model URL : https://huggingface.co/karunmv/my_awesome_opus_books_model ### Model Description : No model card New: Create and edit this model card directly on the website!
theofcks/MATUE30PRAUM
https://huggingface.co/theofcks/MATUE30PRAUM
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : theofcks/MATUE30PRAUM ### Model URL : https://huggingface.co/theofcks/MATUE30PRAUM ### Model Description :
LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2 ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2 ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
zerobig86/glaucoma-clasification
https://huggingface.co/zerobig86/glaucoma-clasification
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : zerobig86/glaucoma-clasification ### Model URL : https://huggingface.co/zerobig86/glaucoma-clasification ### Model Description : No model card New: Create and edit this model card directly on the website!
CHATHISTORY/0.5B-Model-1
https://huggingface.co/CHATHISTORY/0.5B-Model-1
This is a model uploaded by Markus Liu (liuyu), a trial to use a 0.5b language model.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : CHATHISTORY/0.5B-Model-1 ### Model URL : https://huggingface.co/CHATHISTORY/0.5B-Model-1 ### Model Description : This is a model uploaded by Markus Liu (liuyu), a trial to use a 0.5b language model.
nightdude/ddpm-butterflies-128
https://huggingface.co/nightdude/ddpm-butterflies-128
These are LoRA adaptation weights for anton_l/ddpm-butterflies-128. The weights were fine-tuned on the huggan/smithsonian_butterflies_subset dataset. You can find some example images in the following.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : nightdude/ddpm-butterflies-128 ### Model URL : https://huggingface.co/nightdude/ddpm-butterflies-128 ### Model Description : These are LoRA adaptation weights for anton_l/ddpm-butterflies-128. The weights were fine-tuned on the huggan/smithsonian_butterflies_subset dataset. You can find some example images in the following.
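LoRA adapters like the one in the card above keep the base weights frozen and add a trainable low-rank update: the adapted layer computes Wx + (alpha/r) * B(Ax), where A and B have rank r. A minimal pure-Python sketch of that forward pass (toy 2x2 matrices, hypothetical alpha and r, not the diffusers implementation):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]


def lora_forward(x, W, A, B, alpha=2, r=2):
    """Frozen base projection W plus scaled low-rank update B @ A."""
    base = matvec(W, x)              # frozen pretrained path
    low_rank = matvec(B, matvec(A, x))  # trainable adapter path
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]


# With identity matrices and alpha/r == 1, the adapter simply
# doubles the base output: [1, 2] -> [2, 4].
out = lora_forward([1.0, 2.0],
                   W=[[1.0, 0.0], [0.0, 1.0]],
                   A=[[1.0, 0.0], [0.0, 1.0]],
                   B=[[1.0, 0.0], [0.0, 1.0]])
```

Only A and B are saved in an adapter checkpoint, which is why LoRA weight files are small relative to the base model.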
LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2 ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2 ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Amrinkar/CartModel2
https://huggingface.co/Amrinkar/CartModel2
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Amrinkar/CartModel2 ### Model URL : https://huggingface.co/Amrinkar/CartModel2 ### Model Description : No model card New: Create and edit this model card directly on the website!
heshamourad/marian-finetuned-kde4-en-to-fr
https://huggingface.co/heshamourad/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : heshamourad/marian-finetuned-kde4-en-to-fr ### Model URL : https://huggingface.co/heshamourad/marian-finetuned-kde4-en-to-fr ### Model Description : This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
LoneStriker/DeepMagic-Coder-7b-6.0bpw-h6-exl2
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-6.0bpw-h6-exl2
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-6.0bpw-h6-exl2 ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-6.0bpw-h6-exl2 ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Aryanne/TinyMix-1.1B
https://huggingface.co/Aryanne/TinyMix-1.1B
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Aryanne/TinyMix-1.1B ### Model URL : https://huggingface.co/Aryanne/TinyMix-1.1B ### Model Description : No model card New: Create and edit this model card directly on the website!
chenhugging/mistral-7b-medmcqa-inst-v1
https://huggingface.co/chenhugging/mistral-7b-medmcqa-inst-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medmcqa_instruct dataset. The following hyperparameters were used during training: hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medmcqa-inst-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1 hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : chenhugging/mistral-7b-medmcqa-inst-v1 ### Model URL : https://huggingface.co/chenhugging/mistral-7b-medmcqa-inst-v1 ### Model Description : This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medmcqa_instruct dataset. The following hyperparameters were used during training: hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medmcqa-inst-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1 hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
tom192180/distilbert-base-uncased_odm_zphr_0st20sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st20sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st20sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st20sd ### Model Description : No model card New: Create and edit this model card directly on the website!
LoneStriker/DeepMagic-Coder-7b-8.0bpw-h8-exl2
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-8.0bpw-h8-exl2
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-8.0bpw-h8-exl2 ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-8.0bpw-h8-exl2 ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing). This is the first of my models to use mergekit's task_arithmetic merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The merge was created using Mergekit and the parameters can be found below:
abhiparspec/phi2-qlora1
https://huggingface.co/abhiparspec/phi2-qlora1
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : abhiparspec/phi2-qlora1 ### Model URL : https://huggingface.co/abhiparspec/phi2-qlora1 ### Model Description :
abertoooo/evadiosa
https://huggingface.co/abertoooo/evadiosa
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : abertoooo/evadiosa ### Model URL : https://huggingface.co/abertoooo/evadiosa ### Model Description : No model card New: Create and edit this model card directly on the website!
tom192180/distilbert-base-uncased_odm_zphr_0st21sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st21sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st21sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st21sd ### Model Description : No model card New: Create and edit this model card directly on the website!
andysalerno/rainbowfish-v6
https://huggingface.co/andysalerno/rainbowfish-v6
This is an SFT of andysalerno/mistral-sft-v3. It uses the dataset andysalerno/rainbowfish-v1, a filtered combination of Nectar, Glaive, Ultrachat, and Distilmath. It uses the ChatML format natively, with special tokens added at the model level and tokenizer level. Testing shows it follows the ChatML format reliably. The plan is to further tune this model with DPO to improve chat quality. Another version, tuned over 2 epochs instead of 1, is also planned. 4x A6000 for ~4 hours. See the axolotl.yaml file for details on the training config.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : andysalerno/rainbowfish-v6 ### Model URL : https://huggingface.co/andysalerno/rainbowfish-v6 ### Model Description : This is an SFT of andysalerno/mistral-sft-v3. It uses the dataset andysalerno/rainbowfish-v1, a filtered combination of Nectar, Glaive, Ultrachat, and Distilmath. It uses the ChatML format natively, with special tokens added at the model level and tokenizer level. Testing shows it follows the ChatML format reliably. The plan is to further tune this model with DPO to improve chat quality. Another version, tuned over 2 epochs instead of 1, is also planned. 4x A6000 for ~4 hours. See the axolotl.yaml file for details on the training config.
kxx-kkk/FYP_deberta-v3-base_adversarialQA
https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_adversarialQA
This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : kxx-kkk/FYP_deberta-v3-base_adversarialQA ### Model URL : https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_adversarialQA ### Model Description : This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Maycol56v/Sara
https://huggingface.co/Maycol56v/Sara
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Maycol56v/Sara ### Model URL : https://huggingface.co/Maycol56v/Sara ### Model Description : No model card New: Create and edit this model card directly on the website!
deepnetguy/fa1d9006
https://huggingface.co/deepnetguy/fa1d9006
Failed to access https://huggingface.co/deepnetguy/fa1d9006 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : deepnetguy/fa1d9006 ### Model URL : https://huggingface.co/deepnetguy/fa1d9006 ### Model Description : Failed to access https://huggingface.co/deepnetguy/fa1d9006 - HTTP Status Code: 404
gotchu/s8-solar-merge
https://huggingface.co/gotchu/s8-solar-merge
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: The following YAML configuration was used to produce this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : gotchu/s8-solar-merge ### Model URL : https://huggingface.co/gotchu/s8-solar-merge ### Model Description : This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: The following YAML configuration was used to produce this model:
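The SLERP method named in the gotchu/s8-solar-merge card interpolates between two models' weights along the arc between them rather than along a straight line. A minimal pure-Python sketch of the underlying formula (toy 2-element vectors standing in for weight tensors, not Mergekit's actual code):

```python
import math


def slerp(t, v0, v1):
    """Spherical linear interpolation between vectors v0 and v1 at t in [0, 1]."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-8:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]


# Midpoint between two orthogonal unit vectors stays on the unit circle.
mid = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

Unlike linear interpolation, the SLERP midpoint preserves the vectors' magnitude, which is the usual motivation for choosing it when merging model weights.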
LoneStriker/DeepMagic-Coder-7b-AWQ
https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-AWQ
DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the models integrity (at least with limited testing). This is the first of my models to use the merge-kits task_arithmetic merging method. The method is detailed bellow, and its clearly very usefull for merging ai models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The Merge was created using Mergekit and the paremeters can be found bellow:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/DeepMagic-Coder-7b-AWQ ### Model URL : https://huggingface.co/LoneStriker/DeepMagic-Coder-7b-AWQ ### Model Description : DeepMagic-Coder-7b Alternate version: This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the models integrity (at least with limited testing). This is the first of my models to use the merge-kits task_arithmetic merging method. The method is detailed bellow, and its clearly very usefull for merging ai models that were fine-tuned from a common base: Task Arithmetic: The original models used in this merge can be found here: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct The Merge was created using Mergekit and the paremeters can be found bellow:
Nogayara/leomicrofonebom
https://huggingface.co/Nogayara/leomicrofonebom
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Nogayara/leomicrofonebom ### Model URL : https://huggingface.co/Nogayara/leomicrofonebom ### Model Description : No model card New: Create and edit this model card directly on the website!
bianxg/q-FrozenLake-v1-4x4-noSlippery
https://huggingface.co/bianxg/q-FrozenLake-v1-4x4-noSlippery
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bianxg/q-FrozenLake-v1-4x4-noSlippery ### Model URL : https://huggingface.co/bianxg/q-FrozenLake-v1-4x4-noSlippery ### Model Description : This is a trained model of a Q-Learning agent playing FrozenLake-v1.
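A Q-Learning agent like the one in the FrozenLake card learns by the standard temporal-difference update: Q[s][a] += alpha * (r + gamma * max(Q[s']) - Q[s][a]). A minimal sketch with a hypothetical toy Q-table (not the uploaded model's actual training code):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-Learning step on a list-of-lists Q-table."""
    best_next = max(Q[next_state])          # value of the greedy next action
    td_target = reward + gamma * best_next  # bootstrapped return estimate
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q


# Toy table: 2 states x 2 actions. Taking action 0 in state 0 yields
# reward 1.0 and lands in state 1, whose best action is worth 1.0.
Q = [[0.0, 0.0], [1.0, 0.0]]
q_update(Q, state=0, action=0, reward=1.0, next_state=1)
```

Repeating this update while exploring the grid (e.g. with an epsilon-greedy policy) is what produces the trained Q-table checkpoints these repos host.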
tom192180/distilbert-base-uncased_odm_zphr_0st22sd
https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st22sd
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tom192180/distilbert-base-uncased_odm_zphr_0st22sd ### Model URL : https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st22sd ### Model Description : No model card New: Create and edit this model card directly on the website!
humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1
https://huggingface.co/humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1 ### Model URL : https://huggingface.co/humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1 ### Model Description : This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
frntcx/Reinforce
https://huggingface.co/frntcx/Reinforce
This is a trained model of a Reinforce agent playing CartPole-v1. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : frntcx/Reinforce ### Model URL : https://huggingface.co/frntcx/Reinforce ### Model Description : This is a trained model of a Reinforce agent playing CartPole-v1. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
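A Reinforce (policy-gradient) agent like the one in the card above weights each action's log-probability gradient by the discounted return that followed it. Computing those returns can be sketched as follows (a toy illustration, not the course's actual code; the gamma value is arbitrary):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} by scanning the episode backward."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()  # restore chronological order
    return returns


# Three steps of reward 1 with gamma=0.5:
# G_2 = 1.0, G_1 = 1.5, G_0 = 1.75.
gs = discounted_returns([1.0, 1.0, 1.0], gamma=0.5)
```

In the full algorithm, the policy loss for an episode is the sum of -log pi(a_t | s_t) * G_t over the episode's steps, which these returns feed directly.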