MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models Paper • 2410.17578 • Published Oct 23 • 1
Multilingual RewardBench Collection Multilingual Reward Model Evaluation Dataset and Results • 2 items • Updated Oct 26 • 4
M-RewardBench: Evaluating Reward Models in Multilingual Settings Paper • 2410.15522 • Published Oct 20 • 10
Retrieval-Augmented Generation Collection Artifacts for "Open-RAG: Enhanced Retrieval Augmented Reasoning with Open-Source Large Language Models" [EMNLP 2024 Findings] • 4 items • Updated 28 days ago