---
license: mit
---

# TIGERScore

Project Page | Paper | Code | 🤗Demo | 🤗TIGERScore-7B | 🤗TIGERScore-13B

## Introduction

We present TIGERScore, a **T**rained metric that follows **I**nstruction **G**uidance to perform **E**xplainable and **R**eference-free evaluation over a wide spectrum of text generation tasks. TIGERScore is guided by natural language instructions to provide error analysis that pinpoints the mistakes in the generated text. Our metric is based on LLaMA-2 and trained on our meticulously curated instruction-tuning dataset MetricInstruct, which covers 6 text generation tasks and 23 text generation datasets. As a reference-free metric, its correlation can even surpass the best existing reference-based metrics. To further assess the rationales generated by our metric qualitatively, we conducted a human evaluation of the generated explanations and found that they are 70.8% accurate. Through these experimental results, we believe TIGERScore demonstrates the possibility of building universal explainable metrics to evaluate any text generation task.

## Training Data

The models are trained on the 🤗 MetricInstruct Dataset, which covers 6 text generation tasks and 22 text generation datasets. Check out the dataset card for more details.

## Training Procedure

The models are fine-tuned on the MetricInstruct dataset, using the original Llama-2 models as bases. The training procedure varies across models depending on their size. Check out our paper for more details.

## Evaluation

As a reference-free metric, TIGERScore significantly surpasses traditional metrics such as BLEU, ROUGE, BARTScore, and BLEURT, as well as emerging LLM-based metrics. Although our dataset was originally sourced from ChatGPT, our distilled model actually outperforms ChatGPT itself, which demonstrates the effectiveness of our filtering strategy. On the unseen task of story generation, TIGERScore also shows reasonable generalization capability.

| Tasks→ | Summarization | Translation | Data2Text | Long-form QA | MathQA | Instruction Following | Story-Gen | Average |
|---|---|---|---|---|---|---|---|---|
| GPT-3.5-turbo (few-shot) | 38.50 | 40.53 | 40.20 | 29.33 | 66.46 | 23.20 | 4.77 | 34.71 |
| GPT-4 (zero-shot) | 36.46 | 43.87 | 44.04 | 48.95 | 51.71 | 58.53 | 32.48 | 45.15 |
| BLEU | 11.98 | 19.73 | 33.29 | 11.38 | 21.12 | 46.61 | -1.17 | 20.42 |
| ROUGE-2f | 14.53 | 17.83 | 35.49 | 16.83 | 22.12 | 44.56 | 2.34 | 21.96 |
| InstructScore | 26.33 | 47.30 | 43.93 | 21.62 | -4.15 | 16.19 | 16.13 | 23.91 |
| GPTScore-ref | 14.73 | 24.95 | 39.42 | 31.60 | 18.20 | 33.14 | 18.24 | 25.75 |
| BARTScore-cnn (hypo-ref) | 13.64 | 28.53 | 36.12 | 29.57 | 23.35 | 32.49 | 26.64 | 27.19 |
| BARTScore-para (hypo-ref) | 17.18 | 33.72 | 40.79 | 28.94 | 17.27 | 34.47 | 17.43 | 27.11 |
| BERTScore | 23.67 | 42.41 | 43.75 | 25.60 | 11.53 | 45.77 | 2.88 | 27.95 |
| BLEURT | 17.30 | 48.41 | 48.76 | 33.26 | 3.53 | 36.46 | 27.52 | 30.75 |
| UniEval (summ) | 47.52 | 21.90 | 38.38 | 41.83 | 19.78 | 16.02 | 44.46 | 32.84 |
| COMET-22 | 33.75 | 56.35 | 33.92 | 35.28 | -5.53 | 46.13 | 39.20 | 34.16 |
| BARTScore-para (src-hypo) | 38.68 | 9.60 | 32.26 | 26.86 | -2.70 | 5.92 | 20.55 | 18.74 |
| BARTScore-cnn (src-hypo) | 35.50 | 12.83 | 34.33 | 40.96 | 1.50 | 25.43 | 33.48 | 26.29 |
| Llama-2-13b-chat (0-shot) | 28.53 | 14.38 | 29.24 | 19.91 | 1.08 | 21.37 | 26.78 | 20.18 |
| COMETKiwi | 16.27 | 48.48 | 27.90 | 18.05 | -11.48 | 34.86 | 18.47 | 21.79 |
| GPTScore-src | 37.41 | 8.90 | 28.82 | 39.48 | 14.25 | 26.46 | 23.91 | 25.61 |
| TIGERScore-7B (ours) | 35.11 | 41.50 | 42.39 | 47.11 | 21.23 | 43.57 | 39.26 | 38.60 |
| TIGERScore-13B (ours) | 36.81 | 44.99 | 45.88 | 46.22 | 23.32 | 47.03 | 46.36 | 41.52 |
| Δ (ours - best reference-free) | -2 | -3 | +12 | +5 | +9 | +14 | +13 | +16 |
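The values above are correlation scores with human judgments (see the paper for the exact evaluation protocol). As a minimal, self-contained sketch of how such a correlation can be computed, the following uses a plain Kendall tau-a (no tie correction) on hypothetical human ratings and metric scores:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a rank correlation between two equal-length sequences."""
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        product = (x[i] - x[j]) * (y[i] - y[j])
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)

# Hypothetical human ratings and metric scores for five model outputs.
human  = [4.0, 2.5, 3.0, 1.0, 5.0]
metric = [-1.0, -4.5, -3.0, -6.0, -0.5]  # e.g. negated total error penalty

print(kendall_tau(human, metric))  # 1.0 here: the two rankings agree perfectly
```

In practice, libraries such as `scipy.stats.kendalltau` handle ties and significance testing; the data above is purely illustrative.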

## Formatting

To format the data fields into a single prompt for fine-tuning or testing, we provide the following reference code:

```python
from string import Template

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (adjust the repo id for the 7B variant).
tigerscore_tokenizer = AutoTokenizer.from_pretrained("TIGER-Lab/TIGERScore-13B")
tigerscore_model = AutoModelForCausalLM.from_pretrained(
    "TIGER-Lab/TIGERScore-13B", torch_dtype=torch.bfloat16, device_map="auto"
)

FINETUNE_INST = "You are evaluating errors in a model-generated output for a given instruction."
FINETUNE_INPUT = """\
Instruction: ${generation_instruction}
${input_context}


Model-generated Output:
${hypothesis_output}


For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)

Your evaluation output:
"""

# FINETUNE_INST contains no template fields, so it is used as-is.
# `instruction`, `input_context`, and `hypo_output` are your task's data fields.
inst_part = FINETUNE_INST
input_part = Template(FINETUNE_INPUT).substitute(
    generation_instruction=instruction,
    input_context=input_context,
    hypothesis_output=hypo_output,
)
prompt = (inst_part + "\n" + input_part).strip("\n ") + "\n"
encodings = tigerscore_tokenizer(prompt, return_tensors="pt")
input_ids = encodings["input_ids"].to(tigerscore_model.device)
attention_mask = encodings["attention_mask"].to(tigerscore_model.device)
outputs = tigerscore_model.generate(
    input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=512
)
# Decode only the newly generated tokens (the error analysis).
analysis = tigerscore_tokenizer.decode(
    outputs[0][input_ids.shape[1]:], skip_special_tokens=True
)
```

Example of formatted prompt:

```
You are evaluating errors in a model-generated output for a given instruction.
Instruction: Translate the following text from German to English.
Der künftige EM-Cheforganisator Philipp Lahm soll laut Grindel im DFB-Präsidium mitarbeiten.


Model-generated Output:
According to Grindel, the future head of the European Championships, Philipp Lahm, is to participate in the DFB Presidency.


For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)

Your evaluation output:
```
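The model responds with a structured error analysis listing, for each error, a location, aspect, explanation, severity, and a score reduction. To turn that analysis into a single numeric score, the reductions can be summed and negated. The exact output format may vary, so the line pattern below is an assumption to adapt to your model's actual output:

```python
import re

def total_score(analysis: str) -> float:
    """Sum the score reductions in a TIGERScore-style error analysis and
    return the (negative) overall score. The line pattern is an assumption;
    adjust it to the model's actual output format."""
    reductions = re.findall(r"[Rr]eduction of score:?\s*(-?\d+(?:\.\d+)?)", analysis)
    return -sum(float(r) for r in reductions)

# Hypothetical model output with two identified errors.
example = """\
Error 1: "DFB Presidency" should be "DFB Executive Committee".
Severity: Major
Reduction of score: 4.0
Error 2: Minor phrasing issue.
Severity: Minor
Reduction of score: 0.5
"""
print(total_score(example))  # -4.5
```

A lower (more negative) score indicates more or more severe errors; an output with no detected errors scores 0.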

## Citation

```bibtex
@article{jiang2023TIGERScore,
  title={TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks},
  author={Dongfu Jiang and Yishan Li and Ge Zhang and Wenhao Huang and Bill Yuchen Lin and Wenhu Chen},
  journal={arXiv preprint arXiv:2310.00752},
  year={2023}
}
```