---
language:
- en
- sw
- ig
- so
- es
- ca
license: apache-2.0
metrics:
- accuracy
- bertscore
- bleu
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- chemistry
- biology
- legal
- art
- music
- finance
- code
- medical
- merge
- climate
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- HuggingFaceTB/cosmopedia-100k
- HuggingFaceFW/fineweb-edu
- m-a-p/CodeFeedback-Filtered-Instruction
- heliosbrahma/mental_health_chatbot_dataset
base_model: LeroyDyer/_Spydaz_Web_AI_
---
# Uploaded model

- **Developed by:** Leroy "Spydaz" Dyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/SpydazWebAI_004

[<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>](https://github.com/spydaz)
* The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.
* Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:
  * 32k context window (vs 8k context in v0.1)
  * Rope-theta = 1e6
  * No Sliding-Window Attention
# Introduction

## SpydazWeb AI model

### Methods

Trained for multi-task operations as well as RAG and function calling.
This model is fully functioning and fully uncensored.
The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle, with the focus mainly on methodology:
* Chain of thoughts
* Step by step
* Tree of thoughts
* Forest of thoughts
* Graph of thoughts
* Agent generation: voting, ranking, ...
With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.
The model has been intensively trained in recalling data previously entered into the matrix.
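The voting step mentioned under agent generation can be sketched as simple self-consistency: sample several reasoning chains, extract each chain's final answer, and return the majority. The sampled answers below are illustrative placeholders, not actual model output:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most frequent final answer among candidate generations."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from five sampled chains of thought:
samples = ["42", "42", "41", "42", "40"]
print(majority_vote(samples))  # → 42
```

Ranking works the same way, except each candidate carries a score (e.g. from a judge prompt) and the top-scoring answer is returned instead of the most frequent one.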
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)