
Model Card

Mistral 7B Instruct v0.2 CT-Choice is a fine-tuned Mistral 7B Instruct v0.2 model that provides well-calibrated confidence estimates for multiple-choice question answering.

The model is fine-tuned (calibration-tuned) using a dataset of multiple-choice generations from mistralai/Mistral-7B-Instruct-v0.2, labeled for correctness. At test/inference time, the probability of correctness defines the confidence of the model in its answer. For full details, please see our paper and supporting code.

Other Models: We also release a broader collection of Multiple-Choice CT Models.

Usage

This adapter is meant to be used on top of generations from the mistralai/Mistral-7B-Instruct-v0.2 base model.

The confidence estimation pipeline follows these steps:

  1. Load base model and PEFT adapter.
  2. Disable adapter and generate answer.
  3. Enable adapter and generate confidence.

All standard guidelines for the base model's generation apply.

For a complete example, see play.py in the supporting code repository.
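As a rough illustration, the sketch below walks through the three steps with the transformers and peft libraries. The adapter id, the generation settings, and the uncertainty-query string are assumptions for illustration only; the exact prompt formats and label tokens used to score confidence are the ones defined in play.py.

```python
# Minimal sketch of the three-step confidence pipeline (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "calibration-tuning/Mistral-7B-Instruct-v0.2-ct-choice"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# 1. Load base model and PEFT adapter.
model = PeftModel.from_pretrained(base, adapter_id)

# Multiple-choice question, formatted per the base model's usual guidelines.
prompt = "..."  # fill in with your question and answer choices
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# 2. Disable adapter and generate the answer with the unmodified base model.
with model.disable_adapter():
    output_ids = model.generate(**inputs, max_new_tokens=16, do_sample=False)
answer = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)

# 3. Enable adapter (the default state outside the context manager) and score
#    confidence: run a forward pass on the question, the generated answer, and
#    an uncertainty query, then read the probability of correctness from the
#    label-token logits. The query string below is a placeholder; use the
#    prompt format and label tokens from play.py.
unc_prompt = prompt + answer + "\nIs the proposed answer correct?"
unc_inputs = tokenizer(unc_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**unc_inputs).logits[0, -1]
# confidence = probability mass assigned to the "correct" label token(s)
```

With the adapter enabled, the probability the model assigns to the "correct" label under the uncertainty query serves as its confidence estimate for the generated answer.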

NOTE: Using the adapter for generation may hurt downstream task accuracy and confidence estimates. We recommend using the adapter only for confidence estimation.

License

The model is released under the original model's Apache 2.0 license.
