mradermacher committed
Commit 7b76aee · verified · 1 Parent(s): b3ff3d8

auto-patch README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED

@@ -6,6 +6,8 @@ language:
 - en
 library_name: transformers
 license: apache-2.0
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 tags:
 - fluently-lm
@@ -32,6 +34,9 @@ tags:
 static quants of https://huggingface.co/fluently-sets/FalconThink3-10B-IT

 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#FalconThink3-10B-IT-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/FalconThink3-10B-IT-i1-GGUF
 ## Usage

@@ -75,6 +80,6 @@ questions you might have and/or if you want some other model quantized.

 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.

 <!-- end -->
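
The first hunk adds a small `mradermacher:` block with a `readme_rev` key to the README's YAML frontmatter. A minimal sketch of reading that key back out, assuming the README uses standard Hugging Face frontmatter delimited by `---` lines (the helper name `read_frontmatter` is ours, not part of any tooling in this repo):

```python
import yaml  # pip install pyyaml

def read_frontmatter(path: str) -> dict:
    """Extract the YAML block between the leading '---' delimiters."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        return {}
    # maxsplit=2 keeps any later '---' inside the README body intact
    _, block, _ = text.split("---", 2)
    return yaml.safe_load(block) or {}

meta = read_frontmatter("README.md")
# After this commit, meta.get("mradermacher") should be {"readme_rev": 1}.
print(meta.get("mradermacher", {}).get("readme_rev"))
```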
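The second hunk points readers at the model page and the companion imatrix repo for downloads. A hedged sketch of fetching one of the static quants with `huggingface_hub`; the filename below is an assumption for illustration, so check the repo's file list (or the linked model page) for the exact quant names:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Downloads one GGUF file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/FalconThink3-10B-IT-GGUF",
    filename="FalconThink3-10B-IT.Q4_K_M.gguf",  # hypothetical filename
)
print(path)
```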