Update README.md
README.md
CHANGED
@@ -9,7 +9,7 @@ license: cc-by-sa-4.0

**slim-summary-tool** is a 4_K_M quantized GGUF version of slim-summary, a small, specialized, locally deployable model that provides fast, high-quality summarization of complex business documents, with summary output structured as a Python list of key points.

-The size of the self-contained GGUF model binary is 1.71 GB, which is small enough to run locally on a CPU with reasonable inference speed, and has been
+The size of the self-contained GGUF model binary is 1.71 GB, which is small enough to run locally on a CPU with reasonable inference speed, and has been designed to balance high quality with the ability to deploy on a local machine.

The model takes as input a text passage, an optional parameter with a focusing phrase or query, and an experimental optional (N) parameter, which is used to guide the model to a specific number of items returned in the summary list.
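For illustration, a minimal local-inference sketch along these lines is shown below, using llama-cpp-python rather than any particular framework. The GGUF file name and the prompt template are assumptions made for the example, not the model's documented interface, and should be replaced with the actual file in this repository and the prompt format described in the model card.

```python
from ast import literal_eval
from llama_cpp import Llama

# Load the self-contained quantized GGUF binary on CPU.
# NOTE: the file name below is an assumption; use the .gguf file from this repo.
llm = Llama(model_path="slim-summary-tool.gguf", n_ctx=2048, verbose=False)

passage = "Text of the business document to be summarized goes here."

# Optional focusing phrase plus the experimental (N) count parameter.
# NOTE: this prompt wrapper is illustrative only, not the documented template.
params = "key points (3)"
prompt = f"<human>: {passage}\n<{params}>\n<bot>:"

out = llm(prompt, max_tokens=256, stop=["<human>:"])
raw = out["choices"][0]["text"].strip()

# The model is described as returning a Python list of key points,
# so try to parse it literally, falling back to the raw string.
try:
    summary_points = literal_eval(raw)
except (ValueError, SyntaxError):
    summary_points = raw

print(summary_points)
```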