simonhughes22 committed commit 905f6e4 (1 parent: 250806e)

Update README.md

Files changed (1): README.md (+4 -1)
README.md CHANGED
@@ -30,7 +30,7 @@ This model is based on [microsoft/deberta-v3-base](https://huggingface.co/micros
 
 ## Usage with Sentence Transformers (Recommended)
 
-The model can be used like this:
+The model can be used like this, on pairs of documents, passed as a list of lists of strings (`List[List[str]]`):
 
 ```python
 from sentence_transformers import CrossEncoder
@@ -52,6 +52,9 @@ This returns a numpy array representing a factual consistency score. A score < 0
 array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.0014127002, 0.0028262993], dtype=float32)
 ```
 
+Note that the model is designed to work with entire documents, as long as they fit into the 512-token context window (across both documents).
+Also note that the order of the documents matters: the first document is the source, and the second is validated against the first for factual consistency, e.g. as a summary of the source or a claim drawn from it.
+
 ## Usage with Transformers AutoModel
 You can also use the model directly with the Transformers library (without the SentenceTransformers library):
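The added lines above describe the input shape for the Sentence Transformers path: pairs passed as `List[List[str]]`, with the source document first. A minimal sketch of that call pattern follows; the example pairs and scores are hypothetical placeholders (not real model output), and the model load is commented out so the snippet runs without a download.

```python
from typing import List

# Each inner pair is [source_document, document_to_validate]: the second
# entry is checked for factual consistency against the first.
pairs: List[List[str]] = [
    ["A man walks into a bar and buys a drink.",
     "A bloke swigs alcohol at a pub."],
    ["The capital of France is Berlin.",
     "The capital of France is Paris."],
]

# Real usage (requires sentence-transformers and a model download):
# from sentence_transformers import CrossEncoder
# model = CrossEncoder("vectara/hallucination_evaluation_model")
# scores = model.predict(pairs)

# Placeholder scores for illustration only; higher means the second
# document is judged more factually consistent with the first.
scores = [0.61, 0.0005]

for (source, hypothesis), score in zip(pairs, scores):
    print(f"{score:.4f}: {hypothesis}")
```

Since only the second document in each pair is validated against the first, swapping the two documents generally changes the score.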