Model Card for XProvence-reranker
XProvence is a zero-cost context pruning model that integrates seamlessly with rerankers for retrieval-augmented generation, and is particularly optimized for question answering. Given a user question and a retrieved passage, XProvence removes the sentences of the passage that are not relevant to the question. This speeds up generation and reduces context noise, in a plug-and-play manner for any LLM.
XProvence extends Provence by natively supporting 16 languages. It also supports 100+ languages through cross-lingual transfer, since it is based on BGE-m3, which is pretrained on 100+ languages.
Developed by: Naver Labs Europe
License: XProvence is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (CC BY-NC-ND 4.0).
- Model: XProvence
- Backbone model: bge-reranker-v2-m3
- Model size: 568 million parameters
- Context length: 8192 tokens

Training and evaluation code & data are available in the Bergen repo.
Usage
XProvence uses spaCy for sentence segmentation:

```shell
pip install spacy
python -m spacy download xx_sent_ud_sm
```
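XProvence uses spaCy's multilingual model (`xx_sent_ud_sm`) to split each passage into candidate sentences before pruning. As a rough illustration of what this segmentation step produces, here is a minimal regex-based sketch; the real segmenter is model-based and handles far more cases, and `naive_split_sentences` is a hypothetical helper, not part of the XProvence API:

```python
import re

def naive_split_sentences(text: str) -> list[str]:
    # Very rough stand-in for spaCy's sentence segmentation:
    # split after '.', '!' or '?' followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

sentences = naive_split_sentences(
    "Shepherd's pie. History. In early cookery books, the dish was a means of using leftover roasted meat."
)
# Three candidate sentences; the first one can serve as the title.
```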
Pruning a single context for a single question:

```python
from transformers import AutoModel

xprovence = AutoModel.from_pretrained("naver/XProvence", trust_remote_code=True)

context = "Shepherd's pie. History. In early cookery books, the dish was a means of using leftover roasted meat of any kind, and the pie dish was lined on the sides and bottom with mashed potato, as well as having a mashed potato crust on top. Variations and similar dishes. Other potato-topped pies include: The modern \"Cumberland pie\" is a version with either beef or lamb and a layer of breadcrumbs and cheese on top. In medieval times, and modern-day Cumbria, the pastry crust had a filling of meat with fruits and spices. In Quebec, a variation on the cottage pie is called \"Pâté chinois\". It is made with ground beef on the bottom layer, canned corn in the middle, and mashed potato on top. The \"shepherdess pie\" is a vegetarian version made without meat, or a vegan version made without meat and dairy. In the Netherlands, a very similar dish called \"philosopher's stew\" () often adds ingredients like beans, apples, prunes, or apple sauce. In Brazil, a dish called in refers to the fact that a manioc puree hides a layer of sun-dried meat."
question = "What goes on the bottom of Shepherd's pie?"

xprovence_output = xprovence.process(question, context)
# print(f"XProvence Output: {xprovence_output}")
# XProvence Output: {'reranking_score': 3.022725, 'pruned_context': 'In early cookery books, the dish was a means of using leftover roasted meat of any kind, and the pie dish was lined on the sides and bottom with mashed potato, as well as having a mashed potato crust on top.'}
```
You can also pass a list of questions and a list of lists of contexts (multiple contexts per question to be pruned) for batched processing.
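The batched contract is: questions as a list of strings, and contexts as a list of lists, one inner list of contexts per question. A small shape check (a hypothetical helper, not part of the XProvence API) makes the expected shapes explicit:

```python
def check_batch_shapes(questions, contexts):
    # One inner list of contexts per question: len(contexts) == len(questions).
    assert isinstance(questions, list) and all(isinstance(q, str) for q in questions)
    assert isinstance(contexts, list) and len(contexts) == len(questions)
    assert all(isinstance(ctx_list, list) for ctx_list in contexts)

questions = ["What goes on the bottom of Shepherd's pie?", "Where is Pâté chinois from?"]
contexts = [
    ["Shepherd's pie. In early cookery books, ...", "Other potato-topped pies include: ..."],
    ["In Quebec, a variation on the cottage pie is called Pâté chinois. ..."],
]
check_batch_shapes(questions, contexts)  # passes: 2 questions, 2 context lists
```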
Setting `always_select_title=True` will additionally keep the first sentence, "Shepherd's pie." This is especially useful for Wikipedia articles, where the title is often needed to understand the context. More details on how the title is defined are given below.

```python
xprovence_output = xprovence.process(question, context, always_select_title=True)
# print(f"XProvence Output: {xprovence_output}")
# XProvence Output: {'reranking_score': 3.022725, 'pruned_context': "Shepherd's pie. In early cookery books, the dish was a means of using leftover roasted meat of any kind, and the pie dish was lined on the sides and bottom with mashed potato, as well as having a mashed potato crust on top."}
```
Model interface

Interface of the `process` function:

- `question` (`Union[List[str], str]`): an input question (str), or a list of input questions for batched processing.
- `context` (`Union[List[List[str]], str]`): context(s) to be pruned. This can be either a single string (in the case of a single str question), or a list of lists of contexts (a list of contexts per question), with `len(contexts)` equal to `len(questions)`.
- `title` (`Optional[Union[List[List[str]], str]]`, default: `"first_sentence"`): an optional argument for defining titles. If `title="first_sentence"`, the first sentence of each context is assumed to be the title. If `title=None`, it is assumed that no titles are provided. Titles can also be passed as a list of lists of str, i.e., shaped the same way as the contexts. Titles are only used if `always_select_title=True`.
- `threshold` (float, default: 0.3): the threshold to use for context pruning. We recommend 0.3 for more conservative pruning (no or lowest performance drops) and 0.7 for higher compression, but this value can be tuned further to meet specific use-case requirements.
- `always_select_title` (bool, default: True): if True, the first sentence (title) will be included in the selection whenever the model selects a non-empty set of sentences. This is important, e.g., for Wikipedia passages, to properly contextualize the following sentences.
- `batch_size` (int, default: 32)
- `reorder` (bool, default: False): if True, the provided contexts for each question will be reordered according to the computed question-passage relevance scores. If False, the original user-provided order of contexts is preserved.
- `top_k` (int, default: 5): if `reorder=True`, specifies the number of top-ranked passages to keep for each question.
- `enable_warnings` (bool, default: True): whether to print warnings about model usage, e.g., for overly long contexts or questions.
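To make the `threshold` and `always_select_title` semantics concrete, the following sketch mimics the selection step on made-up per-sentence relevance scores (the real scores come from the model): sentences scoring above the threshold are kept, and the title sentence is force-included whenever the selection is non-empty. `select_sentences` is an illustrative helper, not the model's actual implementation:

```python
def select_sentences(sentences, scores, threshold=0.3, always_select_title=True):
    # Keep sentences whose (hypothetical) relevance score exceeds the threshold.
    keep = [i for i, s in enumerate(scores) if s > threshold]
    # If anything was kept, force-include the title (sentence 0).
    if always_select_title and keep and 0 not in keep:
        keep = [0] + keep
    return " ".join(sentences[i] for i in keep)

sentences = [
    "Shepherd's pie.",
    "History.",
    "The pie dish was lined with mashed potato.",
    "In Quebec it is called Pâté chinois.",
]
scores = [0.1, 0.05, 0.9, 0.2]  # made-up scores for illustration
print(select_sentences(sentences, scores, threshold=0.3))
# "Shepherd's pie. The pie dish was lined with mashed potato."
```

Raising the threshold (e.g., to 0.7) keeps fewer sentences, trading a bit of answer coverage for higher compression.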
Model features
- XProvence natively supports 16 languages.
- XProvence supports 100+ languages via cross-lingual transfer.
- XProvence encodes all sentences in the passage together: this enables capturing of coreferences between sentences and provides more accurate context pruning.
- XProvence automatically detects the number of sentences to keep, based on a threshold. We found that the default value of a threshold works well across various domains, but the threshold can be adjusted further to better meet the particular use case needs.
- XProvence works out-of-the-box with any LLM.
Model Details
- Input: user question (e.g., a sentence) + retrieved context passage (e.g., a paragraph)
- Output: pruned context passage, i.e., irrelevant sentences are removed + relevance score (can be used for reranking)
- Model Architecture: The model was initialized from bge-reranker-v2-m3 and finetuned with two objectives: (1) output a binary mask which can be used to prune irrelevant sentences; and (2) preserve initial reranking capabilities.
- Training data: Training queries from MIRACL and documents from Wikipedia, with synthetic silver labelling of which sentences to keep, produced using aya-expanse-8b.
- Languages covered: Arabic, Bengali, English, Spanish, Persian, Finnish, French, Hindi, Indonesian, Japanese, Korean, Russian, Swahili, Telugu, Thai, Chinese
- Context length: 8192 tokens (similar to the pretrained BGE-m3 model)
- Evaluation: we evaluate XProvence on 26 languages across 6 different datasets. We find that XProvence prunes irrelevant sentences with little-to-no performance drop across all languages and outperforms existing baselines on the Pareto front.
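Since the model outputs a relevance score per question-passage pair, the `reorder`/`top_k` behavior described above amounts to sorting passages by that score and truncating. A sketch with made-up scores (`reorder_contexts` is an illustrative helper, not part of the model's API):

```python
def reorder_contexts(scored_contexts, top_k=5):
    # scored_contexts: list of (reranking_score, pruned_context) pairs.
    # Sort by descending relevance and keep the top_k passages.
    ranked = sorted(scored_contexts, key=lambda pair: pair[0], reverse=True)
    return [ctx for _, ctx in ranked[:top_k]]

scored = [(0.4, "passage A"), (3.0, "passage B"), (1.2, "passage C")]
print(reorder_contexts(scored, top_k=2))
# ['passage B', 'passage C']
```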
License
This work is licensed under CC BY-NC-ND 4.0.