Dataset columns: Model Name (string, 5-122 characters), URL (string, 28-145 characters), Crawled Text (string, 1-199k characters), text (string, 180-199k characters; a templated concatenation of the other columns).
shnl/llama2-13b-vinewsqa
https://huggingface.co/shnl/llama2-13b-vinewsqa
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
parvpareek/convnextv2-large-22k-384-finetuned-eurosat
https://huggingface.co/parvpareek/convnextv2-large-22k-384-finetuned-eurosat
No model card
shazzz/ppo-LunarLander-v2
https://huggingface.co/shazzz/ppo-LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. TODO: Add your code
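The card above leaves the usage code as a TODO. A minimal sketch of loading and evaluating such a checkpoint with stable-baselines3 and huggingface_sb3 might look like the following; the zip filename inside the repo is an assumption, not something the card states.

```python
# Minimal sketch: load a PPO LunarLander-v2 checkpoint from the Hub and evaluate it.
# Assumes the repo stores the agent as "ppo-LunarLander-v2.zip" (not confirmed by the card).
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="shazzz/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```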
Kalinga/dinov2-base-finetuned-oxford
https://huggingface.co/Kalinga/dinov2-base-finetuned-oxford
No model card
BunnyToon/nilce
https://huggingface.co/BunnyToon/nilce
null
surya47/medclip-roco
https://huggingface.co/surya47/medclip-roco
null
InceptiveDev/skill_recommendation_model
https://huggingface.co/InceptiveDev/skill_recommendation_model
null
tndklab/wav2vec_RTSplit0207_3
https://huggingface.co/tndklab/wav2vec_RTSplit0207_3
This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-japanese on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
shnl/llama2-7b-viquad
https://huggingface.co/shnl/llama2-7b-viquad
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
rickprime/hal-420
https://huggingface.co/rickprime/hal-420
No model card
spsither/whisper-small-v4
https://huggingface.co/spsither/whisper-small-v4
No model card
shnl/llama2-13b-viquad
https://huggingface.co/shnl/llama2-13b-viquad
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
super-health-skin-tag-remover/super-health-skin-tag-remover
https://huggingface.co/super-health-skin-tag-remover/super-health-skin-tag-remover
Failed to access https://huggingface.co/super-health-skin-tag-remover/super-health-skin-tag-remover - HTTP Status Code: 404
jikaixuan/zephyr-7b-dpo-full
https://huggingface.co/jikaixuan/zephyr-7b-dpo-full
No model card
tvjoseph/ABSA2
https://huggingface.co/tvjoseph/ABSA2
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
SamirXR/Nyx-7.1
https://huggingface.co/SamirXR/Nyx-7.1
Failed to access https://huggingface.co/SamirXR/Nyx-7.1 - HTTP Status Code: 404
Turtle344/results
https://huggingface.co/Turtle344/results
No model card
Hemg/housegradim
https://huggingface.co/Hemg/housegradim
No model card
jiyonghug/dash_nlp_bert_new_share_0207
https://huggingface.co/jiyonghug/dash_nlp_bert_new_share_0207
No model card
blitzerrr/nestech-news-llm-dataset
https://huggingface.co/blitzerrr/nestech-news-llm-dataset
No model card
karawalla/ship-ai-v2_peft
https://huggingface.co/karawalla/ship-ai-v2_peft
Failed to access https://huggingface.co/karawalla/ship-ai-v2_peft - HTTP Status Code: 404
karawalla/ship-ai-v2_release
https://huggingface.co/karawalla/ship-ai-v2_release
Failed to access https://huggingface.co/karawalla/ship-ai-v2_release - HTTP Status Code: 404
lombardata/provazioboiadinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze
https://huggingface.co/lombardata/provazioboiadinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze
Failed to access https://huggingface.co/lombardata/provazioboiadinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze - HTTP Status Code: 404
obrmmk/tinycodellama-jp-0.6b-30k-v2-instruct-CSNmix5k
https://huggingface.co/obrmmk/tinycodellama-jp-0.6b-30k-v2-instruct-CSNmix5k
No model card
obrmmk/tinycodellama-jp-0.6b-30k-v2-instruct-CSN
https://huggingface.co/obrmmk/tinycodellama-jp-0.6b-30k-v2-instruct-CSN
No model card
logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters
https://huggingface.co/logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
superfriends/captainplanet-v5
https://huggingface.co/superfriends/captainplanet-v5
Failed to access https://huggingface.co/superfriends/captainplanet-v5 - HTTP Status Code: 404
danaleee/dog
https://huggingface.co/danaleee/dog
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth. You can find some example images below. LoRA for the text encoder was enabled: False.
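As a rough illustration of how DreamBooth LoRA weights like these are typically applied with the diffusers library: the trigger phrase "a photo of sks dog" comes from the card, but everything else below is a generic sketch under the assumption that the repo stores diffusers-format LoRA weights, not instructions from the author.

```python
# Sketch: apply the LoRA adapter on top of the base Stable Diffusion 1.4 pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/dog")  # assumes diffusers-format LoRA weights in the repo

image = pipe("a photo of sks dog on the beach", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```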
shnl/llama2-7b-vimmrc1.0
https://huggingface.co/shnl/llama2-7b-vimmrc1.0
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
shnl/llama2-13b-vimmrc1.0
https://huggingface.co/shnl/llama2-13b-vimmrc1.0
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
Fredh99/finetune_mistral_7b_v0.2_action_selection
https://huggingface.co/Fredh99/finetune_mistral_7b_v0.2_action_selection
No model card
lombardata/provazioboia2dinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze
https://huggingface.co/lombardata/provazioboia2dinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze
Failed to access https://huggingface.co/lombardata/provazioboia2dinov2-large-2024_02_07-with_data_aug_batch-size32_epochs1_freeze - HTTP Status Code: 404
saraswathi01/a2c
https://huggingface.co/saraswathi01/a2c
No model card
shnl/llama2-7b-vimmrc2.0
https://huggingface.co/shnl/llama2-7b-vimmrc2.0
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
OpenSafetyLab/MD-Judge-v0.1
https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1
MD-Judge is an LLM-based safety guard, fine-tuned on top of Mistral-7B. It serves as a classifier that evaluates the safety of QA pairs. MD-Judge was built to study the safety of different LLMs as a general evaluation tool, and it is proposed in the SALAD-Bench paper. Please refer to our GitHub for more usage examples.
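The card points to the SALAD-Bench GitHub repository for the exact evaluation prompt. As a hedged sketch only, loading the judge with transformers and scoring a QA pair might look roughly like this; the prompt wording below is a placeholder, not the official template.

```python
# Sketch: load MD-Judge and ask it to rate the safety of a question-answer pair.
# The real evaluation template lives in the SALAD-Bench repo; this prompt is a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenSafetyLab/MD-Judge-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

question = "How do I bake bread at home?"
answer = "Mix flour, water, salt and yeast, let the dough rise, then bake at 230C."
prompt = f"[INST] Evaluate the safety of the following QA pair.\nQuestion: {question}\nAnswer: {answer} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```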
yeye776/OndeviceAI-large
https://huggingface.co/yeye776/OndeviceAI-large
This model is a fine-tuned version of paust/pko-t5-large on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
venustar1228/5FxE7qPLWH8o9wzjh5cTcLu1e64AFXE9KLecrSjYQnp5VYWe_vgg
https://huggingface.co/venustar1228/5FxE7qPLWH8o9wzjh5cTcLu1e64AFXE9KLecrSjYQnp5VYWe_vgg
No model card
shnl/llama2-13b-vimmrc2.0
https://huggingface.co/shnl/llama2-13b-vimmrc2.0
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
sc1122/ship-ai-v1_release
https://huggingface.co/sc1122/ship-ai-v1_release
Failed to access https://huggingface.co/sc1122/ship-ai-v1_release - HTTP Status Code: 404
AnujS17/File_Loader_Data_Extractor
https://huggingface.co/AnujS17/File_Loader_Data_Extractor
No model card
onizukal/SMIDS_5x_beit_large_RMSProp_lr00001_fold5
https://huggingface.co/onizukal/SMIDS_5x_beit_large_RMSProp_lr00001_fold5
This model is a fine-tuned version of microsoft/beit-large-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
danieleon/5CXNMFEj4BTXea7GmYmcZ61SgqzExYUVEEqRnDu2xjSArb86_vgg
https://huggingface.co/danieleon/5CXNMFEj4BTXea7GmYmcZ61SgqzExYUVEEqRnDu2xjSArb86_vgg
No model card
venustar1228/5DJVK4D27YRhLXVG1zw3xU4unC4VjvikDUxgyNQxMsHkmusp_vgg
https://huggingface.co/venustar1228/5DJVK4D27YRhLXVG1zw3xU4unC4VjvikDUxgyNQxMsHkmusp_vgg
No model card
shnl/llama2-7b-vicoqa
https://huggingface.co/shnl/llama2-7b-vicoqa
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
shnl/llama2-13b-vicoqa
https://huggingface.co/shnl/llama2-13b-vicoqa
[More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] The following bitsandbytes quantization config was used during training: The following bitsandbytes quantization config was used during training:
Akshit2000/distilgpt2-finetuned-wikitext2
https://huggingface.co/Akshit2000/distilgpt2-finetuned-wikitext2
No model card
saraswathi01/a2c-PandaPickAndPlace-v3
https://huggingface.co/saraswathi01/a2c-PandaPickAndPlace-v3
This is a trained model of an A2C agent playing PandaPickAndPlace-v3 using the stable-baselines3 library. TODO: Add your code
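Loading this checkpoint would follow the same huggingface_sb3 pattern as the PPO example earlier, swapping in A2C and the panda-gym environment; the zip filename is again an assumption.

```python
# Sketch: load the A2C agent for PandaPickAndPlace-v3; panda-gym must be installed to register the env.
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers PandaPickAndPlace-v3)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="saraswathi01/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
env = gym.make("PandaPickAndPlace-v3")

obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```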
cth127/gpt-xl-sentencebert-generation
https://huggingface.co/cth127/gpt-xl-sentencebert-generation
null
Diginsa/Plant-Disease-Detection-Project
https://huggingface.co/Diginsa/Plant-Disease-Detection-Project
This model is a fine-tuned version of google/mobilenet_v2_1.0_224 on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
magus4450/speecht5_finetuned_voxpopuli_cs
https://huggingface.co/magus4450/speecht5_finetuned_voxpopuli_cs
This model is a fine-tuned version of microsoft/speecht5_tts on the facebook/voxpopuli dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
muzammil-eds/tinyllama-3T-64k-JSONExtractor-v3
https://huggingface.co/muzammil-eds/tinyllama-3T-64k-JSONExtractor-v3
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
rombodawg/Everyone-Coder-33b-v2-Base
https://huggingface.co/rombodawg/Everyone-Coder-33b-v2-Base
Everyone-Coder-33b-v2-Base is part of the EveryoneLLM series of models made by the community, for the community. It is a coding-specific model built from fine-tunes of deepseek-coder-33b-base. Version 2 of the Everyone-Coder-33b model uses the task_arithmetic merging method, which brings major gains in coding performance compared with the ties method. You should find this version performing much better at coding than Version 1, without any of the negative effects that merging can have on the integrity of the model. Prompt template: Alpaca. The models used in this merge were as follows: https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B https://huggingface.co/WizardLM/WizardCoder-33B-V1.1 Thank you to the creators of the above AI models; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community. 💗 You can find the write-up on merging models here: https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing The config for the merge can be found below:
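The actual merge config is truncated in the crawl, so it is not reproduced here. Since the card names the Alpaca prompt template, a minimal generation sketch with transformers might look like the following; the instruction text is illustrative, not from the card.

```python
# Sketch: query the merged coder model using the Alpaca prompt format named in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Everyone-Coder-33b-v2-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a linked list.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```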
Pranav-10/Sentiment_analysis
https://huggingface.co/Pranav-10/Sentiment_analysis
This repository hosts a sentiment analysis model fine-tuned on the IMDb movie reviews dataset using the DistilBERT architecture. It's designed to classify text inputs into positive or negative sentiment categories. The model is based on the DistilBERT architecture, a smaller, faster, cheaper, and lighter version of BERT. It has been fine-tuned on the IMDb dataset, which consists of 50,000 movie reviews labeled as positive or negative. DistilBERT has been shown to retain most of the performance of BERT while being more efficient, which makes it an excellent choice for sentiment analysis tasks where the model's size and speed are essential. To use the model, you will need to install the transformers library from Hugging Face. You can install it using pip: pip install transformers Once installed, you can use the following code to classify text with this model: from transformers import DistilBertTokenizer, DistilBertForSequenceClassification import torch tokenizer = DistilBertTokenizer.from_pretrained("Pranav-10/Sentimental_Analysis") model = DistilBertForSequenceClassification.from_pretrained("Pranav-10/Sentimental_Analysis") text = "I loved this movie. The performances were fantastic!" inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512) with torch.no_grad(): logits = model(**inputs).logits probabilities = torch.softmax(logits, dim=-1) print(probabilities) Evaluation Results: The model achieved the following performance on the IMDb dataset: Accuracy: 90% Precision: 89% Recall: 91% F1 Score: 90% These results indicate the model's strong performance in classifying sentiments as positive or negative. Training Procedure: The model was trained using the following procedure: Pre-processing: The dataset was pre-processed by converting all reviews to lowercase and tokenizing them with the DistilBERT tokenizer. Optimization: We used the Adam optimizer with a learning rate of 2e-5, a batch size of 16, and trained the model for 3 epochs. Hardware: Training was performed on a single NVIDIA GTX 1650 GPU.
ubaskota/my_mlm_model_masked
https://huggingface.co/ubaskota/my_mlm_model_masked
This model is a fine-tuned version of distilroberta-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ubaskota/my_mlm_model_masked ### Model URL : https://huggingface.co/ubaskota/my_mlm_model_masked ### Model Description : This model is a fine-tuned version of distilroberta-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
https://huggingface.co/yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
These are yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 ### Model URL : https://huggingface.co/yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 ### Model Description : These are yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
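A minimal sketch of how DreamBooth LoRA weights like these are commonly loaded with diffusers; the loading path, fp16/CUDA settings, and generation parameters below are assumptions rather than instructions from the card itself.
# Sketch: SDXL base + this LoRA + the fp16-fix VAE mentioned in the card (assumed workflow).
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4")

# "a photo of MDDL man" is the trigger phrase given in the card.
image = pipe("a photo of MDDL man", num_inference_steps=30).images[0]
image.save("mddl_man.png")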
octadion/phi-2-jagr-ppg-simpkb
https://huggingface.co/octadion/phi-2-jagr-ppg-simpkb
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : octadion/phi-2-jagr-ppg-simpkb ### Model URL : https://huggingface.co/octadion/phi-2-jagr-ppg-simpkb ### Model Description : This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2
https://huggingface.co/yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2
These are yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 ### Model URL : https://huggingface.co/yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 ### Model Description : These are yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
mesolitica/Qwen1.5-0.5B-4096-fpf
https://huggingface.co/mesolitica/Qwen1.5-0.5B-4096-fpf
README at https://github.com/huseinzol05/malaya/tree/5.1/session/qwen2 WandB, https://wandb.ai/huseinzol05/finetune-Qwen1.5-0.5B?workspace=user-huseinzol05
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : mesolitica/Qwen1.5-0.5B-4096-fpf ### Model URL : https://huggingface.co/mesolitica/Qwen1.5-0.5B-4096-fpf ### Model Description : README at https://github.com/huseinzol05/malaya/tree/5.1/session/qwen2 WandB, https://wandb.ai/huseinzol05/finetune-Qwen1.5-0.5B?workspace=user-huseinzol05
brucethemoose/Yi-34B-200K-RPMerge
https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge
A merge of several Yi 34B models with a singular goal: 40K+ context, instruct-enhanced storytelling. Disappointed with some quirks of my previous kitchen sink merges (like token/instruct formats from various models showing up when they shouldn't), I've gone 'back to the basics' and picked a few Vicuna-format only models: DrNicefellow/ChatAllInOne-Yi-34B-200K-V1 and migtissera/Tess-34B-v1.5b both have excellent general instruction-following performance. cgato/Thespis-34b-v0.7 is trained on the "Username: {Input} / BotName: {Response}" format, to emphasize it in the merge (but not force it). It also seems to work for multi-character stories. Doctor-Shotgun/limarpv3-yi-llama-34b-lora is trained on roleplaying data, but merged at a modest weight to not overemphasize it. This is the only non-vicuna model (being alpaca format), but it doesn't seem to interfere with the Vicuna format or adversely affect long-context perplexity. adamo1139/yi-34b-200k-rawrr-dpo-2 is the base for the limarp lora; this is base Yi gently finetuned to discourage refusals. migtissera/Tess-M-Creative-v1.0 and NousResearch/Nous-Capybara-34B are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge. I consider this a more "focused" merge than previous ones. I will investigate other models (perhaps chatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ As well as a very explicit system prompt like this: https://old.reddit.com/r/LocalLLaMA/comments/1aiz6zu/roleplaying_system_prompts/koygiwa/ Chinese models with large tokenizer vocabularies like Yi need careful parameter tuning due to their huge logit sampling "tails." Yi in particular also runs relatively "hot" even at lower temperatures. I am a huge fan of Kalomaze's quadratic sampling (shown as "smoothing factor" where available), as described here: https://github.com/oobabooga/text-generation-webui/pull/5403 Otherwise, I recommend a lower temperature with 0.1 or higher MinP, a little repetition penalty, and mirostat with a low tau, and no other samplers. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841 24GB GPUs can efficiently run Yi-34B-200K models at 40K-90K context with exllamav2, and performant UIs like exui. I go into more detail in this post. Empty 16GB GPUs can still run the high context with aggressive quantization. To load/train this in full-context backends like transformers, you must change max_position_embeddings in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends that support flash attention + 8 bit kv cache, like exllamav2, litellm, vllm or unsloth. Thanks to ParasiticRogue for this idea of a Vicuna-only merge, see: https://huggingface.co/brucethemoose/jondurbin_bagel-dpo-34b-v0.2-exl2-4bpw-fiction/discussions See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8#testing-notes This is a possible base for a storytelling finetune/LASER in the future, once I can bite the bullet and rent some A100s or a MI300. 
I have tested this merge with novel-style continuation (but not much chat-style roleplay), and some assistant-style responses and long context analysis. I haven't seen any refusals so far. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model: I'm part of an AI startup called Holocene AI! We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely none of the nonsense of scammy AI startups. Contact me at: agates.holocene.ai@gmail.com I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : brucethemoose/Yi-34B-200K-RPMerge ### Model URL : https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge ### Model Description : A merge of several Yi 34B models with a singular goal: 40K+ context, instruct-enhanced storytelling. Disappointed with some quirks of my previous kitchen sink merges (like token/instruct formats from various models showing up when they shouldn't), I've gone 'back to the basics' and picked a few Vicuna-format only models: DrNicefellow/ChatAllInOne-Yi-34B-200K-V1 and migtissera/Tess-34B-v1.5b both have excellent general instruction-following performance. cgato/Thespis-34b-v0.7 is trained on the "Username: {Input} / BotName: {Response}" format, to emphasize it in the merge (but not force it). It also seems to work for multi-character stories. Doctor-Shotgun/limarpv3-yi-llama-34b-lora is trained on roleplaying data, but merged at a modest weight to not overemphasize it. This is the only non-vicuna model (being alpaca format), but it doesn't seem to interfere with the Vicuna format or adversely affect long-context perplexity. adamo1139/yi-34b-200k-rawrr-dpo-2 is the base for the limarp lora; this is base Yi gently finetuned to discourage refusals. migtissera/Tess-M-Creative-v1.0 and NousResearch/Nous-Capybara-34B are both "undertrained" Yi models. I find they excel at raw completion performance (like long novel continuations) while still retaining some Vicuna instruct ability. This may be why some still prefer the original Tess 1.0/Capybara merge. I consider this a more "focused" merge than previous ones. I will investigate other models (perhaps chatML models?) for a more "factual assistant" focused merge, as well as a coding-focused merge if I can't find one to suit my needs. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ As well as a very explicit system prompt like this: https://old.reddit.com/r/LocalLLaMA/comments/1aiz6zu/roleplaying_system_prompts/koygiwa/ Chinese models with large tokenizer vocabularies like Yi need careful parameter tuning due to their huge logit sampling "tails." Yi in particular also runs relatively "hot" even at lower temperatures. I am a huge fan of Kalomaze's quadratic sampling (shown as "smoothing factor" where available), as described here: https://github.com/oobabooga/text-generation-webui/pull/5403 Otherwise, I recommend a lower temperature with 0.1 or higher MinP, a little repetition penalty, and mirostat with a low tau, and no other samplers. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841 24GB GPUs can efficiently run Yi-34B-200K models at 40K-90K context with exllamav2, and performant UIs like exui. I go into more detail in this post. Empty 16GB GPUs can still run the high context with aggressive quantization. To load/train this in full-context backends like transformers, you must change max_position_embeddings in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends that support flash attention + 8 bit kv cache, like exllamav2, litellm, vllm or unsloth. 
Thanks to ParasiticRogue for this idea of a Vicuna-only merge, see: https://huggingface.co/brucethemoose/jondurbin_bagel-dpo-34b-v0.2-exl2-4bpw-fiction/discussions See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8#testing-notes This is a possible base for a storytelling finetune/LASER in the future, once I can bite the bullet and rent some A100s or a MI300. I have tested this merge with novel-style continuation (but not much chat-style roleplay), and some assistant-style responses and long context analysis. I haven't seen any refusals so far. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model: I'm part of an AI startup called Holocene AI! We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely none of the nonsense of scammy AI startups. Contact me at: agates.holocene.ai@gmail.com I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas
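A minimal sketch of the config.json workaround described in the card (lowering max_position_embeddings before loading in plain transformers), done in memory rather than by editing the file; the 32768 cap and the dtype/device settings are illustrative assumptions, not recommendations from the card.
# Sketch: cap the advertised context window so full-context backends don't OOM,
# per the card's warning about max_position_embeddings. Values below are assumptions.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_id = "brucethemoose/Yi-34B-200K-RPMerge"

config = AutoConfig.from_pretrained(repo_id)
config.max_position_embeddings = 32768  # instead of 200,000; pick what fits your memory

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)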
rajbabuhug/spoof
https://huggingface.co/rajbabuhug/spoof
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : rajbabuhug/spoof ### Model URL : https://huggingface.co/rajbabuhug/spoof ### Model Description :
danaleee/duck
https://huggingface.co/danaleee/duck
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : danaleee/duck ### Model URL : https://huggingface.co/danaleee/duck ### Model Description : No model card New: Create and edit this model card directly on the website!
chenhugging/mistral-7b-ocn-v1
https://huggingface.co/chenhugging/mistral-7b-ocn-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the oncc_instruct dataset. The following hyperparameters were used during training: hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-ocn-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : chenhugging/mistral-7b-ocn-v1 ### Model URL : https://huggingface.co/chenhugging/mistral-7b-ocn-v1 ### Model Description : This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the oncc_instruct dataset. The following hyperparameters were used during training: hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-ocn-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
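A minimal sketch mirroring the evaluation line above (the Mistral-7B base loaded in 4-bit with this repo applied as a PEFT adapter); the bitsandbytes settings and device mapping are assumptions.
# Sketch: base model in 4-bit + this LoRA/PEFT adapter, matching the eval setup above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "chenhugging/mistral-7b-ocn-v1"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)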
EricValen/ppo-LunarLander-v2
https://huggingface.co/EricValen/ppo-LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. TODO: Add your code
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EricValen/ppo-LunarLander-v2 ### Model URL : https://huggingface.co/EricValen/ppo-LunarLander-v2 ### Model Description : This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. TODO: Add your code
sungile/sd-virtualstaging-zillow-model-600
https://huggingface.co/sungile/sd-virtualstaging-zillow-model-600
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : sungile/sd-virtualstaging-zillow-model-600 ### Model URL : https://huggingface.co/sungile/sd-virtualstaging-zillow-model-600 ### Model Description : No model card New: Create and edit this model card directly on the website!
code-philia/TrustVis
https://huggingface.co/code-philia/TrustVis
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : code-philia/TrustVis ### Model URL : https://huggingface.co/code-philia/TrustVis ### Model Description :
zuazo/whisper-base-eu-2024-02-07
https://huggingface.co/zuazo/whisper-base-eu-2024-02-07
This model is a fine-tuned version of openai/whisper-base on the mozilla-foundation/common_voice_13_0 eu dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : zuazo/whisper-base-eu-2024-02-07 ### Model URL : https://huggingface.co/zuazo/whisper-base-eu-2024-02-07 ### Model Description : This model is a fine-tuned version of openai/whisper-base on the mozilla-foundation/common_voice_13_0 eu dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
brucethemoose/Yi-34B-200K-RPMerge-exl2-3.1bpw
https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge-exl2-3.1bpw
See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge Quantized with default exl2 quantization, still investigating the benefits/drawbacks of long context (32K) quantization. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : brucethemoose/Yi-34B-200K-RPMerge-exl2-3.1bpw ### Model URL : https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge-exl2-3.1bpw ### Model Description : See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge Quantized with default exl2 quantization, still investigating the benefits/drawbacks of long context (32K) quantization. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model:
brucethemoose/Yi-34B-200K-RPMerge-exl2-4.0bpw
https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge-exl2-4.0bpw
See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge Quantized with default exl2 quantization, still investigating the benefits/drawbacks of long context (32K) quantization. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : brucethemoose/Yi-34B-200K-RPMerge-exl2-4.0bpw ### Model URL : https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge-exl2-4.0bpw ### Model Description : See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-RPMerge Quantized with default exl2 quantization, still investigating the benefits/drawbacks of long context (32K) quantization. This model was merged using the DARE TIES merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. The following models were included in the merge: The following YAML configuration was used to produce this model:
Bajiyo/whisper-small-ml
https://huggingface.co/Bajiyo/whisper-small-ml
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Bajiyo/whisper-small-ml ### Model URL : https://huggingface.co/Bajiyo/whisper-small-ml ### Model Description : No model card New: Create and edit this model card directly on the website!
sc1122/ship-ai-v2_release
https://huggingface.co/sc1122/ship-ai-v2_release
Failed to access https://huggingface.co/sc1122/ship-ai-v2_release - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : sc1122/ship-ai-v2_release ### Model URL : https://huggingface.co/sc1122/ship-ai-v2_release ### Model Description : Failed to access https://huggingface.co/sc1122/ship-ai-v2_release - HTTP Status Code: 404
danaleee/CL
https://huggingface.co/danaleee/CL
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using DreamBooth. You can find some example images in the following. LoRA for the text encoder was enabled: False.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : danaleee/CL ### Model URL : https://huggingface.co/danaleee/CL ### Model Description : These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using DreamBooth. You can find some example images in the following. LoRA for the text encoder was enabled: False.
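A minimal sketch of loading these DreamBooth LoRA weights on top of CompVis/stable-diffusion-v1-4 with diffusers; the fp16/CUDA settings and step count are assumptions, and the prompt uses the "sks teddybear" token from the card.
# Sketch: SD 1.4 + this DreamBooth LoRA (assumed workflow, not from the card itself).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("danaleee/CL")

image = pipe("a photo of sks teddybear", num_inference_steps=30).images[0]
image.save("sks_teddybear.png")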
lizadolitl/dr-oz-bioheal-cbd-gummies
https://huggingface.co/lizadolitl/dr-oz-bioheal-cbd-gummies
Failed to access https://huggingface.co/lizadolitl/dr-oz-bioheal-cbd-gummies - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : lizadolitl/dr-oz-bioheal-cbd-gummies ### Model URL : https://huggingface.co/lizadolitl/dr-oz-bioheal-cbd-gummies ### Model Description : Failed to access https://huggingface.co/lizadolitl/dr-oz-bioheal-cbd-gummies - HTTP Status Code: 404
shivi23/lwf-07
https://huggingface.co/shivi23/lwf-07
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : shivi23/lwf-07 ### Model URL : https://huggingface.co/shivi23/lwf-07 ### Model Description : No model card New: Create and edit this model card directly on the website!
Stereotyp1cal/RVC-models
https://huggingface.co/Stereotyp1cal/RVC-models
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Stereotyp1cal/RVC-models ### Model URL : https://huggingface.co/Stereotyp1cal/RVC-models ### Model Description :
Zarakun/whisper_asr_1.1
https://huggingface.co/Zarakun/whisper_asr_1.1
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Zarakun/whisper_asr_1.1 ### Model URL : https://huggingface.co/Zarakun/whisper_asr_1.1 ### Model Description : No model card New: Create and edit this model card directly on the website!
weijie210/mistral_gsm8k_sft
https://huggingface.co/weijie210/mistral_gsm8k_sft
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : weijie210/mistral_gsm8k_sft ### Model URL : https://huggingface.co/weijie210/mistral_gsm8k_sft ### Model Description : This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
sdfqweqwedaf/perfect-pussy
https://huggingface.co/sdfqweqwedaf/perfect-pussy
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : sdfqweqwedaf/perfect-pussy ### Model URL : https://huggingface.co/sdfqweqwedaf/perfect-pussy ### Model Description : No model card New: Create and edit this model card directly on the website!
RikeshSilwal/wav2vec2-ekg-nepali-magh17-5
https://huggingface.co/RikeshSilwal/wav2vec2-ekg-nepali-magh17-5
Failed to access https://huggingface.co/RikeshSilwal/wav2vec2-ekg-nepali-magh17-5 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : RikeshSilwal/wav2vec2-ekg-nepali-magh17-5 ### Model URL : https://huggingface.co/RikeshSilwal/wav2vec2-ekg-nepali-magh17-5 ### Model Description : Failed to access https://huggingface.co/RikeshSilwal/wav2vec2-ekg-nepali-magh17-5 - HTTP Status Code: 404
tndklab/wav2vec_RTSplit0207_4
https://huggingface.co/tndklab/wav2vec_RTSplit0207_4
This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-japanese on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : tndklab/wav2vec_RTSplit0207_4 ### Model URL : https://huggingface.co/tndklab/wav2vec_RTSplit0207_4 ### Model Description : This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-japanese on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
asadmasad/output-6.7b-26k-ds-test
https://huggingface.co/asadmasad/output-6.7b-26k-ds-test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : asadmasad/output-6.7b-26k-ds-test ### Model URL : https://huggingface.co/asadmasad/output-6.7b-26k-ds-test ### Model Description : No model card New: Create and edit this model card directly on the website!
yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2
https://huggingface.co/yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2
These are yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2 ### Model URL : https://huggingface.co/yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2 ### Model Description : These are yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. You should use a photo of MDDL man to trigger the image generation. Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
sarpba/whisper-tiny-hu-normalised
https://huggingface.co/sarpba/whisper-tiny-hu-normalised
Failed to access https://huggingface.co/sarpba/whisper-tiny-hu-normalised - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : sarpba/whisper-tiny-hu-normalised ### Model URL : https://huggingface.co/sarpba/whisper-tiny-hu-normalised ### Model Description : Failed to access https://huggingface.co/sarpba/whisper-tiny-hu-normalised - HTTP Status Code: 404
romil9/rvctraintest
https://huggingface.co/romil9/rvctraintest
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : romil9/rvctraintest ### Model URL : https://huggingface.co/romil9/rvctraintest ### Model Description :
BarBarickoza/RPMix-V2-GGUF
https://huggingface.co/BarBarickoza/RPMix-V2-GGUF
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BarBarickoza/RPMix-V2-GGUF ### Model URL : https://huggingface.co/BarBarickoza/RPMix-V2-GGUF ### Model Description : No model card New: Create and edit this model card directly on the website!
globaldesigners/srk
https://huggingface.co/globaldesigners/srk
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : globaldesigners/srk ### Model URL : https://huggingface.co/globaldesigners/srk ### Model Description :
inhee/SOLAR-10.7B-v1.0-ko
https://huggingface.co/inhee/SOLAR-10.7B-v1.0-ko
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : inhee/SOLAR-10.7B-v1.0-ko ### Model URL : https://huggingface.co/inhee/SOLAR-10.7B-v1.0-ko ### Model Description : No model card New: Create and edit this model card directly on the website!
openbmb/MiniCPM-2B-sft-fp32-llama-format
https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32-llama-format
MiniCPM 技术报告 Technical Report | OmniLMM 多模态模型 Multi-modal Model | CPM-C 千亿模型试用 ~100B Model Trial MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。 我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。 MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism. 详细的评测结果位于github仓库 Detailed evaluation results are in github repo 注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。 Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended. We are investigating the cause now. 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进; 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息; 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果; 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。 Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model. To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models. Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts. Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability. 本仓库中代码依照 Apache-2.0 协议开源 MiniCPM 模型权重的使用则需要遵循 “通用模型许可协议-来源说明-宣传限制-商业授权”。 MiniCPM 模型权重对学术研究完全开放。 如需将模型用于商业用途,请联系cpm@modelbest.cn来获取书面授权,在登记后亦允许免费商业使用。 This repository is released under the Apache-2.0 License. The usage of MiniCPM model weights must strictly follow the General Model License (GML). The models and weights of MiniCPM are completely free for academic research. If you intend to utilize the model for commercial purposes, please reach out to cpm@modelbest.cn to obtain the certificate of authorization. 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : openbmb/MiniCPM-2B-sft-fp32-llama-format ### Model URL : https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32-llama-format ### Model Description : MiniCPM 技术报告 Technical Report | OmniLMM 多模态模型 Multi-modal Model | CPM-C 千亿模型试用 ~100B Model Trial MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。 我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。 MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism. 详细的评测结果位于github仓库 Detailed evaluation results are in github repo 注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。 Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended. We are investigating the cause now. 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进; 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息; 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果; 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。 Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model. To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models. Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts. Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability. 本仓库中代码依照 Apache-2.0 协议开源 MiniCPM 模型权重的使用则需要遵循 “通用模型许可协议-来源说明-宣传限制-商业授权”。 MiniCPM 模型权重对学术研究完全开放。 如需将模型用于商业用途,请联系cpm@modelbest.cn来获取书面授权,在登记后亦允许免费商业使用。 This repository is released under the Apache-2.0 License. The usage of MiniCPM model weights must strictly follow the General Model License (GML). The models and weights of MiniCPM are completely free for academic research. If you intend to utilize the model for commercial purposes, please reach out to cpm@modelbest.cn to obtain the certificate of authorization. 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 As a language model, MiniCPM generates content by learning from a vast amount of text. However, it does not possess the ability to comprehend or express personal opinions or value judgments. Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
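Since the card recommends generating through vLLM rather than plain Hugging Face generation, here is a minimal vLLM sketch for this llama-format checkpoint; the prompt and sampling values are illustrative assumptions.
# Sketch: offline generation with vLLM, as the card recommends. Values are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="openbmb/MiniCPM-2B-sft-fp32-llama-format")
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

outputs = llm.generate(["Introduce the MiniCPM model in one paragraph."], params)
print(outputs[0].outputs[0].text)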
jyoung105/add-detail-xl
https://huggingface.co/jyoung105/add-detail-xl
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jyoung105/add-detail-xl ### Model URL : https://huggingface.co/jyoung105/add-detail-xl ### Model Description : No model card New: Create and edit this model card directly on the website!
jyoung105/noise-offset-xl
https://huggingface.co/jyoung105/noise-offset-xl
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jyoung105/noise-offset-xl ### Model URL : https://huggingface.co/jyoung105/noise-offset-xl ### Model Description : No model card New: Create and edit this model card directly on the website!
LoneStriker/Senku-70B-Full-GGUF
https://huggingface.co/LoneStriker/Senku-70B-Full-GGUF
Finetune of miqu-70b-sf dequant of miqudev's leak of Mistral-70B (allegedly an early mistral medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth. EQ-Bench: 84.89. Will run more benches later.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : LoneStriker/Senku-70B-Full-GGUF ### Model URL : https://huggingface.co/LoneStriker/Senku-70B-Full-GGUF ### Model Description : Finetune of miqu-70b-sf dequant of miqudev's leak of Mistral-70B (allegedly an early mistral medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth. EQ-Bench: 84.89. Will run more benches later.
anjith672/gate-boy2
https://huggingface.co/anjith672/gate-boy2
Text encoder was trained.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : anjith672/gate-boy2 ### Model URL : https://huggingface.co/anjith672/gate-boy2 ### Model Description : Text encoder was trained.
rushidesh/mistral_b_finance_finetuned_test
https://huggingface.co/rushidesh/mistral_b_finance_finetuned_test
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : rushidesh/mistral_b_finance_finetuned_test ### Model URL : https://huggingface.co/rushidesh/mistral_b_finance_finetuned_test ### Model Description : This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Use the code below to get started with the model. [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] BibTeX: [More Information Needed] APA: [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed] [More Information Needed]
jyoung105/lcm-turbo-mix-eulera
https://huggingface.co/jyoung105/lcm-turbo-mix-eulera
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jyoung105/lcm-turbo-mix-eulera ### Model URL : https://huggingface.co/jyoung105/lcm-turbo-mix-eulera ### Model Description : No model card New: Create and edit this model card directly on the website!
Victorfu830717/phi2-finetunedonviggodataset
https://huggingface.co/Victorfu830717/phi2-finetunedonviggodataset
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Victorfu830717/phi2-finetunedonviggodataset ### Model URL : https://huggingface.co/Victorfu830717/phi2-finetunedonviggodataset ### Model Description : No model card New: Create and edit this model card directly on the website!
Schnatz65/distilbert-base-uncased-finetuned-emotion
https://huggingface.co/Schnatz65/distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Schnatz65/distilbert-base-uncased-finetuned-emotion ### Model URL : https://huggingface.co/Schnatz65/distilbert-base-uncased-finetuned-emotion ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
jyoung105/negative-xl
https://huggingface.co/jyoung105/negative-xl
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : jyoung105/negative-xl ### Model URL : https://huggingface.co/jyoung105/negative-xl ### Model Description : No model card New: Create and edit this model card directly on the website!
ertew/ertew
https://huggingface.co/ertew/ertew
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ertew/ertew ### Model URL : https://huggingface.co/ertew/ertew ### Model Description : No model card New: Create and edit this model card directly on the website!
brainventures/deplot_kr
https://huggingface.co/brainventures/deplot_kr
deplot_kr is an Image-to-Data (Text) model based on Google's pix2struct architecture. It was fine-tuned from DePlot, using 300,000 Korean chart image-text pairs. deplot_kr은 google의 pix2struct 구조를 기반으로 한 한국어 image-to-data(텍스트 형태의 데이터 테이블) 모델입니다. DePlot 모델을 한국어 차트 이미지-텍스트 쌍 데이터세트(30만 개)를 이용하여 fine-tuning 했습니다. You can run a prediction by inputting an image. The model predicts the data table, in text form, from the image. 이미지를 모델에 입력하면 모델은 이미지로부터 표 형태의 데이터 테이블을 예측합니다. Model Input Image Model Output - Prediction 대상:제목: 2011-2021 보건복지 분야 일자리의 증유형: 단일형 일반 세로 대형| 보건(천 명) | 복지(천 명)1분위 | 29.7 | 178.42분위 | 70.8 | 97.33분위 | 86.4 | 61.34분위 | 28.2 | 16.05분위 | 52.3 | 0.9 According to Liu et al. (2023)... The model was trained in a TPU environment. This model achieves the following results: For questions and comments, please use the discussion tab or email gloria@brainventur.com
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : brainventures/deplot_kr ### Model URL : https://huggingface.co/brainventures/deplot_kr ### Model Description : deplot_kr is an Image-to-Data (Text) model based on Google's pix2struct architecture. It was fine-tuned from DePlot, using 300,000 Korean chart image-text pairs. deplot_kr은 google의 pix2struct 구조를 기반으로 한 한국어 image-to-data(텍스트 형태의 데이터 테이블) 모델입니다. DePlot 모델을 한국어 차트 이미지-텍스트 쌍 데이터세트(30만 개)를 이용하여 fine-tuning 했습니다. You can run a prediction by inputting an image. The model predicts the data table, in text form, from the image. 이미지를 모델에 입력하면 모델은 이미지로부터 표 형태의 데이터 테이블을 예측합니다. Model Input Image Model Output - Prediction 대상:제목: 2011-2021 보건복지 분야 일자리의 증유형: 단일형 일반 세로 대형| 보건(천 명) | 복지(천 명)1분위 | 29.7 | 178.42분위 | 70.8 | 97.33분위 | 86.4 | 61.34분위 | 28.2 | 16.05분위 | 52.3 | 0.9 According to Liu et al. (2023)... The model was trained in a TPU environment. This model achieves the following results: For questions and comments, please use the discussion tab or email gloria@brainventur.com
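A minimal sketch of running a chart-to-table prediction with this pix2struct/DePlot-based checkpoint; the Pix2Struct classes, the DePlot-style text prompt, and the placeholder image path are assumptions rather than instructions from the card.
# Sketch: image -> data-table text with the standard Pix2Struct API (assumed usage).
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

repo_id = "brainventures/deplot_kr"
processor = Pix2StructProcessor.from_pretrained(repo_id)
model = Pix2StructForConditionalGeneration.from_pretrained(repo_id)

image = Image.open("chart.png")  # placeholder path for a Korean chart image
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")

predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))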
kxx-kkk/FYP_deberta-v3-base_squadv2
https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_squadv2
Failed to access https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_squadv2 - HTTP Status Code: 404
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : kxx-kkk/FYP_deberta-v3-base_squadv2 ### Model URL : https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_squadv2 ### Model Description : Failed to access https://huggingface.co/kxx-kkk/FYP_deberta-v3-base_squadv2 - HTTP Status Code: 404
Gigacat/AliBey
https://huggingface.co/Gigacat/AliBey
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Gigacat/AliBey ### Model URL : https://huggingface.co/Gigacat/AliBey ### Model Description :