
MeetPEFT: Parameter Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7b model, extending its context length from 4k to 16k tokens.

The model is fine-tuned on the MeetingBank and QMSum datasets.
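A minimal usage sketch with the `transformers` library is below. It assumes the LongLoRA adapter weights were merged into the base checkpoint published as MeetPEFT/MeetPEFT-7B-16K; the prompt wording and generation parameters are illustrative assumptions, not values taken from our training setup.

```python
# Hypothetical usage sketch for the MeetPEFT-7B-16K checkpoint.
# Assumes adapter weights are merged into the published model; check
# the repository files to confirm before use.

def build_prompt(transcript: str) -> str:
    """Wrap a raw meeting transcript in a simple summarization prompt.
    The instruction wording here is an assumption, not from the paper."""
    return (
        "Summarize the following meeting transcript:\n\n"
        f"{transcript}\n\nSummary:"
    )

def summarize(transcript: str, max_new_tokens: int = 256) -> str:
    # Imports are local so build_prompt stays usable without
    # torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MeetPEFT/MeetPEFT-7B-16K"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(transcript), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Summarizing a full 16k-token meeting requires enough GPU memory for a 7B model in fp16; `device_map="auto"` lets `transformers` shard the model across available devices.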


Datasets used to train MeetPEFT/MeetPEFT-7B-16K: MeetingBank and QMSum.