andreaskoepf committed
Commit 31f3350 • Parent(s): 0319e91
update info regarding inference via tgi
README.md
CHANGED
@@ -92,12 +92,13 @@ perform safety testing and tuning tailored to their specific applications of the
 
 Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
 
-## Note regarding inference with TGI
 
-
-
-
-
+## Inference via TGI
+
+An early version of this model had an embedding count of 32,007, which was incompatible with sharding via [TGI](https://github.com/huggingface/text-generation-inference).
+In the current version, the embeddings and the lm_head weights have been padded to a multiple of 128 (by replicating the embedding of the unk token, id 0).
+Sharded inference with TGI should now work as expected.
+
 
 ## Configuration Details
 
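For readers who want to apply the same fix to their own checkpoints, below is a minimal sketch (not the exact script used for this model) of padding the token embeddings and `lm_head` to a multiple of 128 by replicating the unk-token row (id 0). It assumes a causal LM that loads with Hugging Face `transformers`; the checkpoint paths are placeholders.

```python
# Illustrative sketch only: pad the token embeddings and lm_head of a causal LM
# to a multiple of 128 so TGI can shard the vocab dimension evenly.
# Checkpoint paths are placeholders; the actual model may have been padded differently.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("path/to/unpadded-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("path/to/unpadded-checkpoint")

multiple = 128
old_vocab = model.get_input_embeddings().weight.shape[0]          # e.g. 32007
new_vocab = ((old_vocab + multiple - 1) // multiple) * multiple   # e.g. 32128

# Grow the input embeddings and lm_head to the padded size, then overwrite the
# newly added rows with the unk-token row (id 0), mirroring the fix described above.
model.resize_token_embeddings(new_vocab)
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    emb[old_vocab:] = emb[0].clone()
    lm_head = model.get_output_embeddings()
    if lm_head is not None:
        lm_head.weight[old_vocab:] = lm_head.weight[0].clone()

model.save_pretrained("path/to/padded-checkpoint")
tokenizer.save_pretrained("path/to/padded-checkpoint")
```

A checkpoint padded this way can then be served sharded with TGI's `text-generation-launcher`, e.g. via its `--num-shard` option.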