rmdhirr committed
Commit: 4f33ca6
Parent: bebfd38

Update README.md

Files changed (1):
  1. README.md (+5 -6)
README.md CHANGED
@@ -22,12 +22,11 @@ license: apache-2.0
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
-## Merge Details
-### Merge Method
+## Quantizations
+### GGUF
+- [Q8_0](https://huggingface.co/dasChronos1/Gluon-8B-Q8_0-GGUF)
 
-This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as a base.
-
-### Models Merged
+## Models Merged
 
 The following models were included in the merge:
 * [NeverSleep/Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
@@ -35,7 +34,7 @@ The following models were included in the merge:
 * [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated) + [Azazelle/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/Azazelle/RP_Format_QuoteAsterisk_Llama3)
 * [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
 
-### Configuration
+## Configuration
 
 The following YAML configuration was used to produce this model:
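The diff view cuts off before the YAML block itself. For orientation only, below is a minimal sketch of what a [mergekit](https://github.com/cg123/mergekit) Model Stock configuration of this shape typically looks like, using the base model named in the removed "Merge Method" section and the three model + LoRA pairs listed above. It is not the configuration from this commit; the `dtype` value and the `model+adapter` spelling are assumptions based on common mergekit usage.

```yaml
# Hypothetical sketch -- not the actual YAML block from this commit (truncated in the diff view).
# Model Stock merge with the Lexi-Uncensored base and the three model + LoRA pairs listed above.
merge_method: model_stock
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
models:
  - model: NeverSleep/Lumimaid-v0.2-8B+kloodia/lora-8b-medic
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct+kloodia/lora-8b-physic
dtype: bfloat16   # assumption; the dtype actually used is not visible in this diff
```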