xxx777xxxASD committed
Commit 7839dc1
Parent: 37be33b

Update README.md

Files changed (1): README.md (+2, -0)
README.md CHANGED
@@ -11,6 +11,8 @@ language:
 
 Experimental RP-oriented MoE, the idea was to get a model that would be equal to or better than the Mixtral 8x7B and it's finetunes in RP/ERP tasks.
 
+The model has totally 25B parameters, of which ~13B are active.
+
 Please feedback me if it's more stable than the [previous version](https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B)
 
 ### Llama 3 ChaoticSoliloquy-v2-4x8B test
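As a rough sanity check of the added "25B total / ~13B active" line: assuming the 4x8B MoE keeps Llama-3-8B's published shapes, shares attention and embedding weights across experts, duplicates only the SwiGLU MLP per expert, and routes each token to 2 of 4 experts (Mixtral-style — all assumptions on my part, not stated in the commit), back-of-the-envelope arithmetic lands close to both figures:

```python
# Hedged parameter-count sketch for a 4x8B Mixtral-style MoE built from
# Llama-3-8B experts. Shapes are the published Llama-3-8B config; the
# sharing scheme (shared attention/embeddings, per-expert MLP, top-2
# routing) is an assumption.

VOCAB, HIDDEN, LAYERS = 128_256, 4_096, 32
KV_DIM, INTERMEDIATE = 1_024, 14_336   # 8 KV heads * 128, SwiGLU width
EXPERTS, ACTIVE_EXPERTS = 4, 2

embed = 2 * VOCAB * HIDDEN                                   # input embeddings + LM head
attn = LAYERS * (2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_DIM)  # q,o + k,v projections
mlp_one = LAYERS * 3 * HIDDEN * INTERMEDIATE                 # gate, up, down for one expert

total = embed + attn + EXPERTS * mlp_one                     # router weights are negligible
active = embed + attn + ACTIVE_EXPERTS * mlp_one

print(f"total:  {total / 1e9:.1f}B")   # ~24.9B, i.e. the quoted 25B
print(f"active: {active / 1e9:.1f}B")  # ~13.7B, i.e. the quoted ~13B
```

With one expert the same formula gives ~8.0B, matching the dense Llama-3-8B it is built from, which is why the 25B/~13B figures are plausible for this merge.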