conceptofmind committed
Commit 5f4a307
Parent(s): a08d83a
Update README.md
README.md CHANGED
@@ -35,9 +35,9 @@ Teraflop AI’s data engine allows for the massively parallel processing of web-
 
 We additionally provide bge-base-en-v1.5 embeddings for the first 512 tokens of each state jurisdiction and federal case law as well as the post-processed documents. Mean pooling and normalization were used for the embeddings.
 
-We used the Sentence Transformers library maintained by Tom Aarsen of Hugging Face to distribute the embedding process across multiple GPUs.
+We used the Sentence Transformers library maintained by Tom Aarsen of Hugging Face to distribute the embedding process across multiple GPUs. Find an example of how to use multiprocessing for embeddings [here](https://github.com/UKPLab/sentence-transformers/blob/66e0ee30843dd411c64f37f65447bb38c7bf857a/examples/applications/computing-embeddings/computing_embeddings_multi_gpu.py).
 
-We improved the inference throughput of the embedding process by using Tri Dao’s Flash Attention.
+We improved the inference throughput of the embedding process by using Tri Dao’s Flash Attention. Find the Flash Attention repository [here](https://github.com/Dao-AILab/flash-attention).
 
 You can read the research paper on the BGE embedding models by Shitao Xiao and Zheng Liu [here](https://arxiv.org/pdf/2309.07597.pdf).
 
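For reference, a minimal sketch of computing bge-base-en-v1.5 embeddings with mean pooling and L2 normalization over the first 512 tokens, as the updated README describes. This is an illustration, not the authors' exact pipeline; the input documents are placeholders.

```python
# Sketch: bge-base-en-v1.5 embeddings for the first 512 tokens of each document,
# with mean pooling over non-padding tokens and L2 normalization.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
model = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")
model.eval()

docs = ["Example case law text ..."]  # placeholder documents

# Keep only the first 512 tokens of each document.
batch = tokenizer(docs, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling over non-padding positions, then L2 normalization.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (num_docs, 768)
```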
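The multi-GPU distribution the README mentions uses the Sentence Transformers multiprocessing API shown in the example linked in the diff. Below is a minimal sketch of that API with placeholder data; the actual corpus, batch sizes, and device layout used by the authors are not specified here.

```python
# Sketch: distribute embedding across all available GPUs with Sentence Transformers.
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":  # required for multiprocessing spawn
    model = SentenceTransformer("BAAI/bge-base-en-v1.5")
    model.max_seq_length = 512  # embed only the first 512 tokens

    sentences = ["Example case law text ..."] * 10_000  # placeholder corpus

    # Start one worker process per available CUDA device.
    pool = model.start_multi_process_pool()
    try:
        embeddings = model.encode_multi_process(sentences, pool, batch_size=64)
        print(embeddings.shape)
    finally:
        model.stop_multi_process_pool(pool)
```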
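The README does not state how Flash Attention was wired into the embedding model, so the sketch below only demonstrates the core `flash_attn_func` call from Tri Dao's repository on dummy, BERT-shaped tensors. It assumes the flash-attn package is installed and a CUDA GPU with fp16/bf16 support is available.

```python
# Sketch: calling the FlashAttention kernel directly on dummy tensors.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 8, 512, 12, 64  # bge-base-like shapes
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Non-causal attention, as used by bidirectional encoders such as BERT/BGE.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=False)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```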