Dhia-GB committed
Commit f3a24d2
1 Parent(s): f518337

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -87,7 +87,10 @@ print(response)
  <br>
 
  ## Benchmarks
- We report in the following table our internal pipeline benchmarks:
+ We report in the following table our internal pipeline benchmarks.
+ - We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
+ - We report **raw scores** obtained by applying the chat template **without fewshot_as_multiturn** (unlike Llama3.1).
+ - We use the same batch size across all models.
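For reference, a minimal sketch of what such a run could look like with lm-evaluation-harness, assuming a recent version whose `simple_evaluate` API exposes `apply_chat_template` and `fewshot_as_multiturn`; the model id, task list, and batch size below are placeholders, not the settings behind the reported scores:

```python
# Hypothetical sketch: chat template applied, fewshot_as_multiturn disabled,
# and one shared batch size, mirroring the evaluation setup described above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-org/your-model",  # placeholder model id
    tasks=["gsm8k", "mmlu"],                      # placeholder task list
    batch_size=8,                                 # same batch size for all models
    apply_chat_template=True,                     # chat template is applied
    fewshot_as_multiturn=False,                   # unlike Llama3.1-style evals
)
print(results["results"])
```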