Update README.md
README.md
CHANGED
@@ -36,7 +36,7 @@ Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench)
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 |

- Note that ChatQA-1.5 is built based on Llama-3 base model, and ChatQA-1.0 is built based on Llama-2 base model. ChatQA-1.5
+ Note that ChatQA-1.5 is built on the Llama-3 base model, and ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 models use the HybriDial training dataset. To ensure a fair comparison, we also report average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench).

## Prompt Format
**We highly recommend that you use the prompt format we provide, as follows:**