|
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)
|
HelpingAI-Lite-2x1B - AWQ
- Model creator: https://huggingface.co/OEvortex/
- Original model: https://huggingface.co/OEvortex/HelpingAI-Lite-2x1B/
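
As a quick start, the sketch below loads an AWQ-quantized checkpoint through `transformers` (which relies on the `autoawq` package and a CUDA GPU). The repo ID is a placeholder, since the exact name of this quantized upload is not stated in the card; substitute the actual repository name.

```python
# pip install transformers autoawq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID: replace with the actual name of this AWQ upload.
model_id = "RichardErkhov/HelpingAI-Lite-2x1B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the AWQ quantization config stored in the checkpoint
# and loads the quantized weights (requires the autoawq package).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how can you help me?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```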
|
|
|
|
|
|
|
|
|
Original model description:

---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- moe
- nlp
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
---
|
|
|
# HelpingAI-Lite |
|
# Subscribe to my YouTube channel |
|
[Subscribe](https://youtube.com/@OEvortex) |
|
|
|
HelpingAI-Lite-2x1B is an MoE (Mixture of Experts) model that surpasses HelpingAI-Lite in accuracy, though it runs marginally slower. This trade-off makes it a good choice when higher accuracy matters more than a slightly longer processing time.
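
Since the original card lists `transformers` as the library, a minimal generation sketch for the unquantized `OEvortex/HelpingAI-Lite-2x1B` checkpoint might look like the following; the prompt and generation settings are illustrative assumptions, not values taken from the card.

```python
import torch
from transformers import pipeline

# The original (unquantized) checkpoint named in this card.
pipe = pipeline(
    "text-generation",
    model="OEvortex/HelpingAI-Lite-2x1B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt and generation length.
result = pipe("Write a short Python function that reverses a string.", max_new_tokens=128)
print(result[0]["generated_text"])
```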
|
|
|
## Language |
|
|
|
The model supports English.
|
|
|
|
|
|
|
|