---
tags:
- generated_from_keras_callback
model-index:
- name: barba
results: []
license: mit
datasets:
- snli
- glue
- clue
- shunk031/JGLUE
- klue
language:
- en
- zh
- ja
- ko
---
# Barba
Barba is a multilingual [natural language inference](http://nlpprogress.com/english/natural_language_inference.html) model for [textual entailment](https://en.wikipedia.org/wiki/Textual_entailment) and [zero-shot text classification](https://joeddav.github.io/blog/2020/05/29/ZSL.html#Classification-as-Natural-Language-Inference), available as an end-to-end service through TensorFlow Serving. Based on [XLM-RoBERTa](https://arxiv.org/abs/1911.02116), it is trained on selected subsets of publicly available English ([SNLI](https://huggingface.co/datasets/snli), [GLUE](https://huggingface.co/datasets/glue)), Chinese ([CLUE](https://huggingface.co/datasets/clue)), Japanese ([JGLUE](https://huggingface.co/datasets/shunk031/JGLUE)), and Korean ([KLUE](https://huggingface.co/datasets/klue)) datasets, as well as other private datasets.
GitHub: https://github.com/hyperonym/barba
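### Usage
A minimal sketch of zero-shot classification with the `transformers` pipeline, which reduces classification to natural language inference under the hood. The model identifier `hyperonym/barba` is an assumption based on the repository name; substitute the actual Hugging Face repo id or a local checkpoint path if it differs.
```python
from transformers import pipeline

# "hyperonym/barba" is an assumed model id; adjust to the real repo or a local path.
classifier = pipeline("zero-shot-classification", model="hyperonym/barba")

# Premise to classify and candidate labels; labels may be phrased in any of the
# supported languages (en/zh/ja/ko) since the underlying encoder is multilingual.
sequence = "Angela Merkel is a politician in Germany and leader of the CDU."
candidate_labels = ["politics", "economy", "entertainment", "environment"]

result = classifier(sequence, candidate_labels)
print(result["labels"][0])  # label with the highest entailment score
```
For deployment behind TensorFlow Serving, the same model can instead be exported as a SavedModel and queried over the REST or gRPC API; the exact request signature depends on how the serving graph is exported, so consult the GitHub repository above for the canonical setup.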
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.1
- Datasets 2.11.0
- Tokenizers 0.13.3