Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ We introduce Llama3-ChatQA-1.5, which excels at conversational question answering
 ## Benchmark Results
 Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows:
 
-| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b |
+| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
 | -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
 | Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 |
 | QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 |