koziev ilya committed
Commit 6aab6f6
1 Parent(s): ee4f669

minor changes in model names throughout the text

Files changed (1)
  1. README.md +14 -6
README.md CHANGED
@@ -16,7 +16,7 @@ widget:
 
 # SBERT_PQ
 
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 312 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps texts & questions to a 312 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
 <!--- Describe your model here -->
 
@@ -32,9 +32,9 @@ Then you can use the model like this:
 
 ```python
 from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
+sentences = ["Кошка ловит мышку.", "Чем занята кошка?"]
 
-model = SentenceTransformer('{MODEL_NAME}')
+model = SentenceTransformer('inkoziev/sbert_pq')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -60,8 +60,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
-model = AutoModel.from_pretrained('{MODEL_NAME}')
+tokenizer = AutoTokenizer.from_pretrained('inkoziev/sbert_pq')
+model = AutoModel.from_pretrained('inkoziev/sbert_pq')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -133,4 +133,12 @@ SentenceTransformer(
 
 ## Citing & Authors
 
-<!--- Describe where people can find more information -->
+```
+@MISC{rugpt_chitchat,
+author = {Ilya Koziev},
+title = {Texts & Questions Relevancy Model},
+url = {https://huggingface.co/inkoziev/sbert_pq},
+year = 2022
+}
+```
+
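
The third hunk shows only the lines around the `mean_pooling` reference; the function body itself lies outside the diff context. The sketch below reproduces the standard sentence-transformers mean-pooling recipe with the model id from this commit; it is an assumption about what the full README contains, not a quote of it.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Standard sentence-transformers mean pooling: average the token embeddings,
# using the attention mask so padding tokens do not contribute.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained('inkoziev/sbert_pq')
model = AutoModel.from_pretrained('inkoziev/sbert_pq')

sentences = ['This is an example sentence', 'Each sentence is converted']
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings.shape)  # 312-dimensional vectors per the model description
```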
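
The updated description and the BibTeX title both frame the model as scoring how relevant a text is to a question. Below is a minimal sketch of that use, assuming the usual sentence-transformers cosine-similarity workflow; this step is not shown in the diff, and the score interpretation is illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('inkoziev/sbert_pq')

text = "Кошка ловит мышку."     # "The cat is catching the mouse."
question = "Чем занята кошка?"  # "What is the cat doing?"

# Encode both strings into the 312 dimensional space and compare them.
embeddings = model.encode([text, question], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"relevancy score: {score:.3f}")  # higher means the text better matches the question
```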