Rename the model.
README.md CHANGED
@@ -4,7 +4,7 @@ license: cc-by-nc-sa-4.0
 
 # LLMLingua-2-Bert-base-Multilingual-Cased-MeetingBank
 
-This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al., 2024)](). It is a [BERT multilingual base model (cased)](https://huggingface.co/google-bert/bert-base-multilingual-cased) fine-tuned to perform token classification for task-agnostic prompt compression. The probability $p_{\text{preserve}}$ of each token $x_i$ is used as the metric for compression. This model is trained on an extractive text compression dataset constructed with the methodology proposed in [LLMLingua-2](), using training examples from [MeetingBank (Hu et al., 2023)](https://meetingbank.github.io/) as the seed data.
+This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al., 2024)](). It is a [BERT multilingual base model (cased)](https://huggingface.co/google-bert/bert-base-multilingual-cased) fine-tuned to perform token classification for task-agnostic prompt compression. The probability $p_{\text{preserve}}$ of each token $x_i$ is used as the metric for compression. This model is trained on [an extractive text compression dataset]() constructed with the methodology proposed in [LLMLingua-2](), using training examples from [MeetingBank (Hu et al., 2023)](https://meetingbank.github.io/) as the seed data.
 
 For more details, please check the home page of [LLMLingua-2]() and the [LLMLingua Series](https://llmlingua.com/).
 
@@ -13,7 +13,7 @@ For more details, please check the home page of [LLMLingua-2]() and the [LLMLingua Series](https://llmlingua.com/).
 from llmlingua import PromptCompressor
 
 compressor = PromptCompressor(
-    model_name="
+    model_name="microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank",
     use_llmlingua2=True
 )
 
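For context, here is a minimal usage sketch of the renamed checkpoint. It builds on the snippet in the diff above and assumes the `compress_prompt` method of the `llmlingua` package; the example prompt, the `rate` value, and the `force_tokens` choice are illustrative and are not taken from the card.

```python
from llmlingua import PromptCompressor

# Load the renamed token-classification checkpoint in LLMLingua-2 mode,
# as shown in the diff above.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank",
    use_llmlingua2=True,
)

# Illustrative prompt (made up for this sketch). rate=0.33 asks the compressor
# to keep roughly a third of the tokens, chosen as those the model assigns the
# highest p_preserve; force_tokens always keeps the listed tokens.
prompt = (
    "City council meeting, June 12: the budget amendment was discussed, "
    "public comments were heard, and the final vote was scheduled for next week."
)
result = compressor.compress_prompt(prompt, rate=0.33, force_tokens=["\n", "?"])

print(result["compressed_prompt"])
```

The returned dictionary also carries bookkeeping such as the token counts before and after compression, which can be used to check the compression rate that was actually achieved.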