TheBloke committed
Commit 3f1caef
1 Parent(s): 7125b1c

Upload README.md

Files changed (1): README.md (+13 -2)
README.md CHANGED
@@ -5,7 +5,7 @@ datasets:
 inference: false
 language:
 - en
-license: mit
+license: cc-by-nc-4.0
 model_creator: Ryan Witzman
 model_name: Go Bruins
 model_type: mistral
@@ -115,7 +115,7 @@ Refer to the Provided Files table below to see what files use which methods, and
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [go-bruins.Q2_K.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
-| [go-bruins.Q3_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
+| [go-bruins.Q3_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
 | [go-bruins.Q3_K_M.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
 | [go-bruins.Q3_K_L.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
 | [go-bruins.Q4_0.gguf](https://huggingface.co/TheBloke/go-bruins-GGUF/blob/main/go-bruins.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
@@ -395,6 +395,17 @@ Note: The original MMLU evaluation has been corrected to include 5-shot data rat
 For any inquiries or feedback, reach out to Ryan Witzman on Discord: `rwitz_`.
 
 ---
+## Citations
+```
+@misc{unacybertron7b,
+  title={Cybertron: Uniform Neural Alignment},
+  author={Xavier Murias},
+  year={2023},
+  publisher = {HuggingFace},
+  journal = {HuggingFace repository},
+  howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16}},
+}
+```
 
 *This model card was created with care by Ryan Witzman.*
411