![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/eAINKZ8HvQuCHI7WxD-HQ.png)

## Semantic Search

As a test, the first 512 tokens of every text were embedded with the model2vec library and the https://huggingface.co/minishlab/M2V_base_output model from @minishlab.

After loading the dataset, use the `embeddings` column for semantic search as follows. See the Jupyter notebook for the full processing script.

You can re-run it on consumer-grade hardware without a GPU. Inference took `Wall time: 1min 36s` on an M3 Max. Embedding the entire text takes 50 minutes but yields poor quality; this is currently under investigation.

```python
from model2vec import StaticModel
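# What follows is a hedged sketch, not the notebook's actual code: it assumes
# you want to rank rows by cosine similarity between a query embedding and the
# dataset's precomputed `embeddings` column. The toy `emb` matrix below is a
# hypothetical stand-in for that column; with the real model you would obtain
# the query vector via StaticModel.from_pretrained("minishlab/M2V_base_output")
# and model.encode(["your query"]).
import numpy as np

def top_k(query_vec, matrix, k=3):
    # Cosine similarity: L2-normalize both sides, then take dot products.
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q
    # Indices of the k highest-scoring rows, best first.
    return np.argsort(scores)[::-1][:k]

# Toy stand-in for the precomputed `embeddings` column (3 rows, 2 dims).
emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(top_k(np.array([1.0, 0.1]), emb, k=2))  # -> [0 2]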