---
dataset_info:
  features:
  - name: query_id
    dtype: string
  - name: corpus_id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: train
    num_bytes: 675736
    num_examples: 24927
  - name: valid
    num_bytes: 39196
    num_examples: 1400
  - name: test
    num_bytes: 35302
    num_examples: 1261
  download_size: 316865
  dataset_size: 750234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
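The features above follow the standard qrels (relevance-judgment) layout: each row ties a `query_id` to a `corpus_id` with an integer relevance `score`. As a hedged sketch, rows like these can be grouped into the nested `{query_id: {corpus_id: score}}` dictionary that common IR evaluation tools expect; the inline sample rows are illustrative placeholders, not real entries from this dataset:

```python
# Sample rows mirroring this dataset's schema (query_id, corpus_id, score).
# In practice they would come from the train/valid/test splits.
rows = [
    {"query_id": "q1", "corpus_id": "d1", "score": 1},
    {"query_id": "q1", "corpus_id": "d2", "score": 0},
    {"query_id": "q2", "corpus_id": "d3", "score": 1},
]

def to_qrels(rows):
    """Group flat qrels rows into {query_id: {corpus_id: score}}."""
    qrels = {}
    for r in rows:
        qrels.setdefault(r["query_id"], {})[r["corpus_id"]] = r["score"]
    return qrels

qrels = to_qrels(rows)
print(qrels["q1"])  # {'d1': 1, 'd2': 0}
```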
This is the CoIR evaluation framework's version of the dataset. Use the code below to evaluate a retrieval model on it:
```python
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel

model_name = "intfloat/e5-base-v2"

# Load the model
model = YourCustomDEModel(model_name=model_name)

# Get tasks
# All tasks: ["codetrans-dl", "stackoverflow-qa", "apps", "codefeedback-mt", "codefeedback-st",
# "codetrans-contest", "synthetic-text2sql", "cosqa", "codesearchnet", "codesearchnet-ccr"]
tasks = get_tasks(tasks=["codetrans-dl"])

# Initialize evaluation
evaluation = COIR(tasks=tasks, batch_size=128)

# Run evaluation
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
```