RichardErkhov committed on
Commit
bc56213
1 Parent(s): 15b4553

uploaded readme

Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

HelpingAI-Lite-2x1B - AWQ
- Model creator: https://huggingface.co/OEvortex/
- Original model: https://huggingface.co/OEvortex/HelpingAI-Lite-2x1B/
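For reference, here is a minimal sketch of loading an AWQ checkpoint like this one with `transformers`. The repo id below is a placeholder (the actual id of this quant repo is not stated here), and loading a pre-quantized AWQ model assumes the `autoawq` package is installed alongside `transformers`:

```python
# Minimal sketch: loading a pre-quantized AWQ checkpoint with transformers.
# Assumes `pip install transformers autoawq` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/HelpingAI-Lite-2x1B-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the AWQ quantization config stored in the checkpoint
# and loads the already-quantized weights; no quantization happens here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```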
Original model description:
---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- moe
- nlp
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md

---

# HelpingAI-Lite
# Subscribe to my YouTube channel
[Subscribe](https://youtube.com/@OEvortex)

HelpingAI-Lite-2x1B is a MoE (Mixture of Experts) model that surpasses HelpingAI-Lite in accuracy, though it runs slightly slower. This trade-off makes HelpingAI-Lite-2x1B a good choice when higher accuracy is worth a modest increase in processing time.
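As an illustration, here is a minimal generation sketch against the original (unquantized) model using `transformers`; the prompt and generation settings are arbitrary examples, not recommendations from the model author:

```python
# Minimal sketch: text generation with the original MoE model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-2x1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example prompt only; adjust max_new_tokens and sampling to taste.
inputs = tokenizer(
    "Write a Python function that reverses a string.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```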

## Language

The model supports the English language.