dwzhu committed on
Commit
3baf815
1 Parent(s): dcc3242

Update README.md

Files changed (1)
  1. README.md +107 -24
README.md CHANGED
@@ -3,54 +3,137 @@ configs:
 - config_name: narrativeqa
   data_files:
   - split: corpus
-    path: "narrativeqa/corpus.jsonl"
+    path: narrativeqa/corpus.jsonl
   - split: queries
-    path: "narrativeqa/queries.jsonl"
+    path: narrativeqa/queries.jsonl
   - split: qrels
-    path: "narrativeqa/qrels.jsonl"
+    path: narrativeqa/qrels.jsonl
-
 - config_name: summ_screen_fd
   data_files:
   - split: corpus
-    path: "summ_screen_fd/corpus.jsonl"
+    path: summ_screen_fd/corpus.jsonl
   - split: queries
-    path: "summ_screen_fd/queries.jsonl"
+    path: summ_screen_fd/queries.jsonl
   - split: qrels
-    path: "summ_screen_fd/qrels.jsonl"
+    path: summ_screen_fd/qrels.jsonl
-
 - config_name: qmsum
   data_files:
   - split: corpus
-    path: "qmsum/corpus.jsonl"
+    path: qmsum/corpus.jsonl
   - split: queries
-    path: "qmsum/queries.jsonl"
+    path: qmsum/queries.jsonl
   - split: qrels
-    path: "qmsum/qrels.jsonl"
+    path: qmsum/qrels.jsonl
-
 - config_name: 2wikimqa
   data_files:
   - split: corpus
-    path: "2wikimqa/corpus.jsonl"
+    path: 2wikimqa/corpus.jsonl
   - split: queries
-    path: "2wikimqa/queries.jsonl"
+    path: 2wikimqa/queries.jsonl
   - split: qrels
-    path: "2wikimqa/qrels.jsonl"
+    path: 2wikimqa/qrels.jsonl
-
 - config_name: passkey
   data_files:
   - split: corpus
-    path: "passkey/corpus.jsonl"
+    path: passkey/corpus.jsonl
   - split: queries
-    path: "passkey/queries.jsonl"
+    path: passkey/queries.jsonl
   - split: qrels
-    path: "passkey/qrels.jsonl"
+    path: passkey/qrels.jsonl
-
 - config_name: needle
   data_files:
   - split: corpus
-    path: "needle/corpus.jsonl"
+    path: needle/corpus.jsonl
   - split: queries
-    path: "needle/queries.jsonl"
+    path: needle/queries.jsonl
   - split: qrels
-    path: "needle/qrels.jsonl"
+    path: needle/qrels.jsonl
+license: apache-2.0
----
+---

## Introduction

This repo contains the LongEmbed benchmark proposed in the paper [LongEmbed: Extending Embedding Models for Long Context Retrieval]() by Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li (arXiv, 2024.04). GitHub repo for LongEmbed: https://github.com/dwzhu-pku/LongEmbed.

**LongEmbed** is designed to benchmark long-context retrieval. It includes two synthetic tasks and four real-world tasks, featuring documents of varying lengths and dispersed target information. It has been integrated into [MTEB](https://github.com/embeddings-benchmark/mteb) for convenient evaluation.

## How to use it?

#### Loading Data

LongEmbed contains six datasets: NarrativeQA, QMSum, 2WikiMultihopQA, SummScreenFD, Passkey, and Needle. Each dataset has three splits: corpus, queries, and qrels. The `corpus.jsonl` file contains the documents, the `queries.jsonl` file contains the queries, and the `qrels.jsonl` file describes the relevance between queries and documents. To load a specific split of a dataset, you may use:

```python
from datasets import load_dataset

# dataset_name in ["narrativeqa", "summ_screen_fd", "qmsum", "2wikimqa", "passkey", "needle"]
# split_name in ["corpus", "queries", "qrels"]
data_list = load_dataset(path="dwzhu/LongEmbed", name="dataset_name", split="split_name")
```
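
For example, here is a minimal sketch that loads all three splits of NarrativeQA and peeks at one query record; the exact field names inside each record are best confirmed by printing a sample:

```python
from datasets import load_dataset

# load all three splits of the narrativeqa config
corpus = load_dataset("dwzhu/LongEmbed", "narrativeqa", split="corpus")
queries = load_dataset("dwzhu/LongEmbed", "narrativeqa", split="queries")
qrels = load_dataset("dwzhu/LongEmbed", "narrativeqa", split="qrels")

print(len(corpus), len(queries), len(qrels))
print(queries[0])  # inspect the record schema before building an index
```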

#### Evaluation

The evaluation of LongEmbed can be easily conducted using MTEB. For the four real-world tasks, you can evaluate as follows:

```python
from tabulate import tabulate
from mteb import MTEB

retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
retrieval_task_results = []
output_dict = {}

evaluation = MTEB(tasks=retrieval_task_list)
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)

for key, value in results.items():
    split = "test" if "test" in value else "validation"
    retrieval_task_results.append([key, value[split]["ndcg_at_1"], value[split]["ndcg_at_10"]])
    output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}

print(tabulate(retrieval_task_results, headers=["Task", "NDCG@1", "NDCG@10"]))
```
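
The snippets in this section assume a `model` object and an `args` namespace defined elsewhere in your script. A minimal sketch of how they might be prepared, assuming a sentence-transformers checkpoint (the model name and settings below are illustrative, not part of the benchmark):

```python
from types import SimpleNamespace

from sentence_transformers import SentenceTransformer

# stand-in for the `args` namespace used in the snippets
args = SimpleNamespace(output_dir="results", batch_size=16)

# any model exposing an encode() method works with MTEB;
# this checkpoint is only an example
model = SentenceTransformer("intfloat/e5-base-v2")
```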

For the two synthetic tasks, since we examine a broad context range of $\{0.25, 0.5, 1, 2, 4, 8, 16, 32\}\times 1024$ tokens, an additional `context_length` parameter is required. You may evaluate as follows:

```python
from tabulate import tabulate
from mteb import MTEB

context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]

needle_passkey_score_list = []
for ctx_len in context_length_list:
    print(f"Running task: NeedleRetrieval, PasskeyRetrieval, context length: {ctx_len}")
    evaluation = MTEB(tasks=["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"])
    results = evaluation.run(model, context_length=ctx_len, overwrite_results=True, batch_size=args.batch_size)
    needle_passkey_score_list.append([ctx_len, results["LEMBNeedleRetrieval"]["test"]["ndcg_at_1"], results["LEMBPasskeyRetrieval"]["test"]["ndcg_at_1"]])

# append the average over all context lengths
needle_passkey_score_list.append(["avg", sum(x[1] for x in needle_passkey_score_list) / len(context_length_list), sum(x[2] for x in needle_passkey_score_list) / len(context_length_list)])

print(tabulate(needle_passkey_score_list, headers=["Context Length", "Needle-ACC", "Passkey-ACC"]))
```
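
Since each synthetic query has exactly one relevant document among its candidates, `ndcg_at_1` reduces to top-1 retrieval accuracy, which is why the results are reported as Needle-ACC and Passkey-ACC.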

## Task Description

LongEmbed includes four real-world retrieval tasks curated from long-form QA and summarization. Note that for the QA and summarization datasets, we use the questions and summaries as queries, respectively.

- [NarrativeQA](https://huggingface.co/datasets/narrativeqa): A QA dataset comprising long stories averaging 50,474 words and corresponding questions about specific content such as characters and events. We adopt the `test` set of the original dataset.
- [2WikiMultihopQA](https://huggingface.co/datasets/THUDM/LongBench/viewer/2wikimqa_e): A multi-hop QA dataset featuring questions with up to 5 hops, synthesized through manually designed templates to prevent shortcut solutions. We use the `test` split of the length-uniformly sampled version from [LongBench](https://huggingface.co/datasets/THUDM/LongBench).
- [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set has only 60 documents, which is too small for document retrieval, we include the `train` set in addition to the `validation` set.
- [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. As with QMSum, its plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).

We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from the [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) test for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of $\{0.25, 0.5, 1, 2, 4, 8, 16, 32\}\times 1024$ tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.

## Task Statistics

| Dataset | Domain | # Queries | # Docs | Avg. Query Words | Avg. Doc Words |
|---------|--------|-----------|--------|------------------|----------------|
| NarrativeQA | Literature, Film | 10,449 | 355 | 9 | 50,474 |
| QMSum | Meeting | 1,527 | 197 | 71 | 10,058 |
| 2WikimQA | Wikipedia | 300 | 300 | 12 | 6,132 |
| SummScreenFD | Screenwriting | 336 | 336 | 102 | 5,582 |
| Passkey | Synthetic | 400 | 800 | 11 | - |
| Needle | Synthetic | 400 | 800 | 7 | - |
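
For the two synthetic tasks, the counts follow from the setup above: 50 test samples for each of the eight context lengths yield 400 queries, and 100 candidate documents per context length yield 800 documents in total.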

## Citation

If you find our paper helpful, please consider citing it as follows:

```bibtex
@article{zhu2024longembed,
  title={LongEmbed: Extending Embedding Models for Long Context Retrieval},
  author={Zhu, Dawei and Wang, Liang and Yang, Nan and Song, Yifan and Wu, Wenhao and Wei, Furu and Li, Sujian},
  journal={arXiv preprint},
  year={2024}
}
```