Collections
Collections including paper arxiv:2212.10496

- Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever
  Paper • 2408.16672 • Published • 6
- Precise Zero-Shot Dense Retrieval without Relevance Labels
  Paper • 2212.10496 • Published • 2
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent
  Paper • 2304.09542 • Published • 4
- Making Text Embedders Few-Shot Learners
  Paper • 2409.15700 • Published • 29

- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 12
- RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
  Paper • 2401.08406 • Published • 37
- BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
  Paper • 2104.08663 • Published • 3
- Precise Zero-Shot Dense Retrieval without Relevance Labels
  Paper • 2212.10496 • Published • 2

- Cognitive Architectures for Language Agents
  Paper • 2309.02427 • Published • 8
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 48
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 70
- Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models
  Paper • 2311.00871 • Published • 2

- RA-DIT: Retrieval-Augmented Dual Instruction Tuning
  Paper • 2310.01352 • Published • 7
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
  Paper • 2203.11171 • Published • 1
- MemGPT: Towards LLMs as Operating Systems
  Paper • 2310.08560 • Published • 7
- Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
  Paper • 2310.06117 • Published • 3
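
The listings above can also be retrieved programmatically. The sketch below is a minimal example, assuming the huggingface_hub client's list_collections API and its "papers/<arxiv-id>" item-filter format; attribute names such as title, slug, upvotes, items, item_type, and item_id follow that client, not this page.

```python
# Minimal sketch: list community collections that include paper arxiv:2212.10496,
# assuming huggingface_hub's list_collections and the "papers/<arxiv-id>" item filter.
from huggingface_hub import list_collections

for collection in list_collections(item="papers/2212.10496", limit=10):
    # Listed collections may carry only a truncated preview of their items;
    # get_collection(collection.slug) returns the full contents.
    print(f"{collection.title} ({collection.slug}) - {collection.upvotes} upvotes")
    for item in collection.items:
        if item.item_type == "paper":
            print(f"  Paper {item.item_id}")
```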