lambdaofgod committed
Commit 7fca8c3
1 Parent(s): e97479a

Add new SentenceTransformer model.

.gitattributes CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+0_WordEmbeddings/pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
0_WordEmbeddings/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:270a73fa12b59a6d46bf6f15e1e2e70d4ca4599baa5f089a05a4e9e18b46d3f6
+size 42848043
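These three lines are the entire checked-in file: Git LFS replaces the ~42 MB weights with a small key-value pointer (spec version, content hash, byte size) and fetches the real blob on checkout. A minimal sketch of reading such a pointer, assuming the file on disk is still the pointer (i.e. the LFS smudge filter has not replaced it with the binary) and the path is relative to the repository root:

```python
def parse_lfs_pointer(path: str) -> dict:
    # Each pointer line is "key value"; split on the first space only,
    # since the oid value ("sha256:...") contains no spaces but the
    # split must not recurse into it.
    with open(path) as f:
        return dict(line.strip().split(" ", 1) for line in f if line.strip())

pointer = parse_lfs_pointer("0_WordEmbeddings/pytorch_model.bin")
print(pointer["oid"])   # sha256:270a73fa...
print(pointer["size"])  # 42848043
```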
0_WordEmbeddings/tokenize_fn.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a30b1330f70b9cbd264085c41abbf6e6622654a5afd27c2cb5f91b8140d63e0
+size 69
0_WordEmbeddings/whitespacetokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
0_WordEmbeddings/wordembedding_config.json ADDED
@@ -0,0 +1,5 @@
+{
+    "tokenizer_class": "mlutil.sentence_transformers_utils.CustomTokenizer",
+    "update_embeddings": false,
+    "max_seq_length": 1000000
+}
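Note that `tokenizer_class` points outside sentence-transformers, at `mlutil.sentence_transformers_utils.CustomTokenizer`, so loading this model will fail unless that package is importable. A small preflight check (the dotted path comes straight from the config above; nothing else is assumed):

```python
import importlib

# Fail fast with a clear message if the custom tokenizer's package is
# missing, rather than deep inside SentenceTransformer's module loading.
try:
    importlib.import_module("mlutil.sentence_transformers_utils")
except ImportError as err:
    raise SystemExit(f"This model needs the mlutil package installed: {err}")
```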
1_WordWeights/config.json ADDED
The diff for this file is too large to render. See raw diff
 
2_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+{
+    "word_embedding_dimension": 200,
+    "pooling_mode_cls_token": false,
+    "pooling_mode_mean_tokens": true,
+    "pooling_mode_max_tokens": false,
+    "pooling_mode_mean_sqrt_len_tokens": false
+}
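This config enables exactly one pooling mode, mask-aware mean pooling: token embeddings are summed per sentence and divided by the number of real (non-padding) tokens. A minimal PyTorch sketch of that operation, with illustrative tensor shapes:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 200); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # zero out padding, then sum
    counts = mask.sum(dim=1).clamp(min=1e-9)       # real tokens per sentence
    return summed / counts                         # (batch, 200)
```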
README.md ADDED
@@ -0,0 +1,60 @@
+---
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+
+---
+
+# lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl
+
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+<!--- Describe your model here -->
+
+## Usage (Sentence-Transformers)
+
+Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
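As a follow-up not in the original card: sentence embeddings are typically compared with cosine similarity. A short example continuing the snippet above, using the library's `util.cos_sim` helper:

```python
from sentence_transformers import util

# Cosine similarity between the two sentence embeddings computed above.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))
```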
+
+## Evaluation Results
+
+<!--- Describe how your model was evaluated -->
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl)
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): WordEmbeddings(
+    (emb_layer): Embedding(53559, 200)
+  )
+  (1): WordWeights(
+    (emb_layer): Embedding(53559, 1)
+  )
+  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+)
+```
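The three modules form a neural bag-of-words (NBOW) encoder: module (0) looks up a 200-dimensional vector per token, module (1) scales each token vector by a learned scalar weight, and module (2) mean-pools the result. A rough sketch of that forward pass (the token ids are made up, and the library's exact normalization of the weighted mean may differ slightly):

```python
import torch

emb = torch.nn.Embedding(53559, 200)    # module (0): word embeddings
weights = torch.nn.Embedding(53559, 1)  # module (1): per-word scalar weights

token_ids = torch.tensor([[4, 17, 256]])  # (batch=1, seq_len=3), hypothetical ids
vectors = emb(token_ids)                  # (1, 3, 200)
w = weights(token_ids)                    # (1, 3, 1)
sentence_vec = (vectors * w).sum(dim=1) / token_ids.shape[1]  # mean pool -> (1, 200)
```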
+
+## Citing & Authors
+
+<!--- Describe where people can find more information -->
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.2.2",
+    "transformers": "4.20.0",
+    "pytorch": "1.10.0+cu111"
+  }
+}
modules.json ADDED
@@ -0,0 +1,20 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "0_WordEmbeddings",
+    "type": "sentence_transformers.models.WordEmbeddings"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_WordWeights",
+    "type": "sentence_transformers.models.WordWeights"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
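modules.json is the pipeline manifest: `SentenceTransformer` resolves each `type` string to a class, has that class load itself from the corresponding subdirectory, and chains the modules in `idx` order. A simplified sketch of that loading loop, assuming it runs from the repository root (the WordEmbeddings module here additionally needs mlutil installed, per its config above):

```python
import importlib
import json

with open("modules.json") as f:
    specs = json.load(f)

modules = []
for spec in sorted(specs, key=lambda s: s["idx"]):
    # "sentence_transformers.models.Pooling" -> module path + class name
    module_path, class_name = spec["type"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    # Each module class loads its own config/weights from its subdirectory.
    modules.append(cls.load(spec["path"]))
```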