yutaozhu94 committed
Commit 21bf235
1 Parent(s): e1f26c6

Update README.md

Files changed (1): README.md (+4 -4)

README.md CHANGED
@@ -1,5 +1,8 @@
  ---
  license: mit
+ language:
+ - en
+ library_name: transformers
  ---

  # SPRING: Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models
@@ -16,9 +19,6 @@ license: mit
  | llama2.13b.chat.added_token_embeddings.pt | [LLaMA-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) |
  | llama2.13b.base.added_token_embeddings.pt | [LLaMA-2-13b-base](https://huggingface.co/meta-llama/Llama-2-13b-hf) |

- ## News
- - May, 2024: We have released our paper on arXiv. The code and models are preparing and will be released later.
-
  ## Introduction

  Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs) for generating more factual, accurate, and up-to-date content. Existing methods either optimize prompts to guide LLMs in leveraging retrieved information or directly fine-tune the LLMs to adapt to RAG scenarios. Although fine-tuning can yield better performance, it often compromises the LLMs' general generation capabilities by modifying their parameters. This limitation poses challenges in practical applications, especially when LLMs are already deployed, as parameter adjustments may affect their original functionality. To address this, we propose a novel method that involves learning scalable and pluggable virtual tokens for RAG. By maintaining the LLMs' original parameters and fine-tuning only the embeddings of these pluggable tokens, our approach not only enhances LLMs' performance but also preserves their general generation capacities. Furthermore, we design several training strategies to improve the scalability, flexibility, and generalizability of our method. Comprehensive experiments across nine question-answering tasks demonstrate the superiority of our approach.
@@ -105,4 +105,4 @@ Please kindly cite our paper if it helps your research:
  eprinttype = {arXiv},
  eprint = {2405.19670}
  }
- ```
+ ```
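
The README's table pairs each released `.pt` file with the LLaMA-2 checkpoint it was trained for. As a rough illustration of how such added-token embeddings could be plugged into a frozen model with `transformers`, here is a minimal sketch; the tensor shape, the placeholder token names, and the loading details are assumptions for illustration, not the repository's documented usage.

```python
# Minimal sketch (not the authors' released code): plugging trained
# virtual-token embeddings into a frozen LLaMA-2 model via transformers.
# Assumed: the .pt file holds a tensor of shape (num_virtual_tokens, hidden_size),
# and the placeholder token names "[ret0]"... are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# Load the released embedding tensor (filename from the table above).
added = torch.load("llama2.13b.chat.added_token_embeddings.pt")
num_virtual = added.shape[0]

# Register placeholder tokens and grow the input embedding matrix to hold them.
virtual_tokens = [f"[ret{i}]" for i in range(num_virtual)]  # hypothetical names
tokenizer.add_tokens(virtual_tokens, special_tokens=True)
model.resize_token_embeddings(len(tokenizer))

# Copy the trained embeddings into the newly added rows; every original
# parameter stays untouched, which is what makes the tokens "pluggable".
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    emb[-num_virtual:] = added.to(emb.dtype)
```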
 
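
The Introduction's key claim is that only the embeddings of the new tokens are trained while every original parameter stays frozen. Below is a minimal PyTorch sketch of one way to set that up, assuming the vocabulary was already resized as above; the gradient-mask trick and the token count are illustrative assumptions, not the authors' released training recipe.

```python
# Sketch of the training idea from the abstract (assumptions, not official code):
# freeze the whole LM, then let gradients update only the embedding rows that
# belong to the newly added virtual tokens.
import torch

def freeze_all_but_new_token_rows(model, num_virtual: int):
    """Freeze every parameter; return the one tensor to optimize (the input
    embedding matrix, with gradients masked down to the newly added rows)."""
    for p in model.parameters():
        p.requires_grad = False

    emb = model.get_input_embeddings().weight
    emb.requires_grad = True  # gradients reach the whole matrix...

    def keep_only_new_rows(grad):
        mask = torch.zeros_like(grad)
        mask[-num_virtual:] = 1.0  # ...but updates touch only the new rows
        return grad * mask

    emb.register_hook(keep_only_new_rows)
    return [emb]

# Usage (token count of 50 is an assumed example; weight decay is disabled
# so the frozen rows are not perturbed by decoupled decay):
# optimizer = torch.optim.AdamW(
#     freeze_all_but_new_token_rows(model, 50), lr=1e-4, weight_decay=0.0
# )
```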