Feature(LLMLingua-2): update the LLMLingua-2 link
#1
by iofu728 - opened

README.md CHANGED
@@ -4,30 +4,30 @@ license: cc-by-nc-sa-4.0

# LLMLingua-2-Bert-base-Multilingual-Cased-MeetingBank

-This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al, 2024)](). It is a [BERT multilingual base model (cased)](https://huggingface.co/google-bert/bert-base-multilingual-cased) finetuned to perform token classification for task agnostic prompt compression. The probability
+This model was introduced in the paper [**LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression** (Pan et al., 2024)](https://arxiv.org/abs/2403.12968). It is a [BERT multilingual base model (cased)](https://huggingface.co/google-bert/bert-base-multilingual-cased) finetuned to perform token classification for task-agnostic prompt compression. The probability $p_{preserve}$ of each token $x_i$ is used as the metric for compression. This model is trained on [an extractive text compression dataset (to be released)]() constructed with the methodology proposed in [**LLMLingua-2**](https://arxiv.org/abs/2403.12968), using training examples from [MeetingBank (Hu et al., 2023)](https://meetingbank.github.io/) as the seed data.

-For more details, please check the
+For more details, please check the project page of [LLMLingua-2](https://llmlingua.com/llmlingua2.html) and the [LLMLingua Series](https://llmlingua.com/).

## Usage
```python
from llmlingua import PromptCompressor

compressor = PromptCompressor(
-
-
-
+    model_name="microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank",
+    use_llmlingua2=True
+)

original_prompt = """John: So, um, I've been thinking about the project, you know, and I believe we need to, uh, make some changes. I mean, we want the project to succeed, right? So, like, I think we should consider maybe revising the timeline.
Sarah: I totally agree, John. I mean, we have to be realistic, you know. The timeline is, like, too tight. You know what I mean? We should definitely extend it.
"""
results = compressor.compress_prompt_llmlingua2(
-
-
-
-
-
-
-
+    original_prompt,
+    rate=0.6,
+    force_tokens=['\n', '.', '!', '?', ','],
+    chunk_end_tokens=['.', '\n'],
+    return_word_label=True,
+    drop_consecutive=True
+)

print(results.keys())
print(f"Compressed prompt: {results['compressed_prompt']}")
```

@@ -50,5 +50,12 @@ for word, label in annotated_results[:10]:

## Citation
```
-{
+@article{wu2024llmlingua2,
+  title = "{LLML}ingua-2: Context-Aware Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression",
+  author = "Zhuoshi Pan and Qianhui Wu and Huiqiang Jiang and Menglin Xia and Xufang Luo and Jue Zhang and Qingwei Lin and Victor Ruhle and Yuqing Yang and Chin-Yew Lin and H. Vicky Zhao and Lili Qiu and Dongmei Zhang",
+  url = "https://arxiv.org/abs/2403.12968",
+  journal = "ArXiv preprint",
+  volume = "abs/2403.12968",
+  year = "2024",
+}
```
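A note on the arguments in the updated usage example, with semantics as documented in the llmlingua package: `rate=0.6` targets keeping roughly 60% of the tokens; `force_tokens` lists tokens that are always preserved in the compressed prompt; `chunk_end_tokens` marks positions where a long prompt may be split into chunks for processing; `return_word_label=True` additionally returns per-word preserve/drop labels (consumed by the `annotated_results` loop referenced in the second hunk); and `drop_consecutive=True` keeps only one token from any consecutive run of forced tokens.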
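Since the card describes the checkpoint as a token classifier whose per-token probability $p_{preserve}$ drives compression, it can also be inspected directly with `transformers`. The sketch below is a minimal illustration, not part of the model card: it assumes label index 1 of the classification head corresponds to the "preserve" class, and the `PromptCompressor` API above remains the supported entry point.

```python
# Minimal sketch: inspect per-token preserve probabilities directly.
# Assumption: label index 1 of the token-classification head is "preserve".
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
model.eval()

text = "So, um, I've been thinking about the project, you know."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, num_labels)

p_preserve = logits.softmax(dim=-1)[0, :, 1]  # probability of the "preserve" class
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, p in zip(tokens, p_preserve):
    print(f"{token:>15s}  {p:.3f}")           # higher p -> more likely to be kept
```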