---
license: apache-2.0
language: en
tags:
- microsoft/deberta-v3-base
datasets:
- multi_nli
- snli
- fever
- tals/vitaminc
- paws
metrics:
- accuracy
- auc
- balanced accuracy
---
# Cross-Encoder for Hallucination Detection
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
The model outputs a probability from 0 to 1: 0 indicates a hallucination, and 1 indicates factual consistency.
The predictions can be thresholded at 0.5 to predict whether a document is consistent with its source.
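As a minimal sketch of that thresholding step (the scores below are illustrative placeholders, not actual model outputs):

```python
import numpy as np

# Illustrative consistency scores (not actual model outputs)
scores = np.array([0.61, 0.0005, 0.996, 0.0002], dtype=np.float32)

# Scores of 0.5 or above are treated as factually consistent;
# scores below 0.5 are treated as likely hallucinations.
is_consistent = scores >= 0.5
print(is_consistent)  # [ True False  True False]
```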
## Training Data
This model is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base). It was first trained on NLI data to determine textual entailment, then further fine-tuned on summarization datasets with samples annotated for factual consistency, including [FEVER](https://huggingface.co/datasets/fever), [Vitamin C](https://huggingface.co/datasets/tals/vitaminc) and [PAWS](https://huggingface.co/datasets/paws).
## Performance
* [TRUE Dataset](https://arxiv.org/pdf/2204.04991.pdf) (Minus Vitamin C, FEVER and PAWS) - 0.872 AUC Score
* [SummaC Benchmark](https://aclanthology.org/2022.tacl-1.10.pdf) (Test Split) - 0.764 Balanced Accuracy, 0.831 AUC Score
* [AnyScale Ranking Test for Hallucinations](https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper) - 86.6 % Accuracy
## Usage with Sentence Transformers (Recommended)
The model can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('vectara/hallucination_evaluation_model')
scores = model.predict([
["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
["Mark Wahlberg was a fan of Manny.", "Manny was a fan of Mark Wahlberg."],
])
```
This returns a numpy array of factual consistency scores; a score below 0.5 indicates a likely hallucination:
```
array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.0014127002, 0.0028262993], dtype=float32)
```
## Usage with Transformers AutoModel
You can also use the model directly with the Transformers library (without the SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
model = AutoModelForSequenceClassification.from_pretrained('vectara/hallucination_evaluation_model')
tokenizer = AutoTokenizer.from_pretrained('vectara/hallucination_evaluation_model')
pairs = [
["A man walks into a bar and buys a drink", "A bloke swigs alcohol at a pub"],
["A person on a horse jumps over a broken down airplane.", "A person is at a diner, ordering an omelette."],
["A person on a horse jumps over a broken down airplane.", "A person is outdoors, on a horse."],
["A boy is jumping on skateboard in the middle of a red bridge.", "The boy skates down the sidewalk on a blue bridge"],
["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond drinking water in public."],
["A man with blond-hair, and a brown shirt drinking out of a public water fountain.", "A blond man wearing a brown shirt is reading a book."],
["Mark Wahlberg was a fan of Manny.", "Manny was a fan of Mark Wahlberg."],
]
inputs = tokenizer.batch_encode_plus(pairs, return_tensors='pt', padding=True)
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits.cpu().detach().numpy()
# convert logits to probabilities with the sigmoid function
scores = (1 / (1 + np.exp(-logits))).flatten()
```
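If you prefer to stay in PyTorch, the manual sigmoid conversion above can equivalently be computed with `torch.sigmoid` (a minimal sketch with illustrative logits, not actual model outputs):

```python
import torch

# Illustrative logits (not actual model outputs)
logits = torch.tensor([0.45, -7.65, 5.62])

# torch.sigmoid(x) computes 1 / (1 + exp(-x)) elementwise,
# mapping raw logits to probabilities in (0, 1)
scores = torch.sigmoid(logits)
```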
This returns a numpy array of factual consistency scores; a score below 0.5 indicates a likely hallucination:
```
array([0.61051559, 0.00047493709, 0.99639291, 0.00021221573, 0.99599433, 0.0014127002, 0.0028262993], dtype=float32)
```