
This is an NLI model based on T5-XXL that predicts a binary label: '1' (entailment) or '0' (no entailment).

It is trained similarly to the NLI model described in the TRUE paper (Honovich et al., 2022), but using the following datasets instead of ANLI:

The input format for the model is: "premise: PREMISE_TEXT hypothesis: HYPOTHESIS_TEXT".
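As a rough illustration, here is a minimal sketch of querying the model with the Hugging Face transformers library. The checkpoint name matches this repository; the example premise/hypothesis pair and the generation settings are assumptions for demonstration, not part of the original card.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Note: T5-XXL has ~11B parameters, so loading it typically requires a large
# GPU (or sharding, e.g. device_map="auto" with accelerate installed).
model_name = "google/t5_xxl_true_nli_mixture"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Illustrative premise/hypothesis pair (assumed, not from the model card).
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Format the input exactly as described above.
input_text = f"premise: {premise} hypothesis: {hypothesis}"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# The model generates the label as text: "1" for entailment, "0" for no entailment.
outputs = model.generate(input_ids, max_new_tokens=5)
label = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(label)
```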

If you use this model for a research publication, please cite the TRUE paper (using the BibTeX entry below) and the dataset papers mentioned above.

```bibtex
@inproceedings{honovich-etal-2022-true-evaluating,
    title = "{TRUE}: Re-evaluating Factual Consistency Evaluation",
    author = "Honovich, Or  and
      Aharoni, Roee  and
      Herzig, Jonathan  and
      Taitelbaum, Hagai  and
      Kukliansy, Doron  and
      Cohen, Vered  and
      Scialom, Thomas  and
      Szpektor, Idan  and
      Hassidim, Avinatan  and
      Matias, Yossi",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.287",
    doi = "10.18653/v1/2022.naacl-main.287",
    pages = "3905--3920",
}
```