Datasets: tianyu-z committed
Commit dd76206 • Parent(s): bf8198e
README.md CHANGED
@@ -74,9 +74,10 @@ We support open-source model_id:
     "THUDM/cogvlm2-llama3-chat-19B",
     "echo840/Monkey-Chat",]
 ```
-Models not on the list are not integrated with Hugging Face; please refer to their GitHub repos to create the evaluation pipeline.
+Models not on the list are not integrated with Hugging Face; please refer to their GitHub repos to create the evaluation pipeline. Examples of the inference logic are in `src/evaluation/inference.py`.
 
 ```bash
+pip install -r requirements.txt
 # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
 # Inference from the VLMs and save the results to {model_id}_{difficulty}_{language}.json
 cd src/evaluation
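The hunk above points to `src/evaluation/inference.py` for the open-source inference logic. A minimal sketch of what that loop plausibly looks like for the idefics2 example follows; the prompt wording, the `test` split, and the `stacked_image` field name are assumptions for illustration, not the repository's actual code.

```python
# Sketch of the open-source inference loop (assumed structure; the real
# logic lives in src/evaluation/inference.py and may differ).
import json

import torch
from datasets import load_dataset
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "HuggingFaceM4/idefics2-8b"
difficulty, language = "easy", "en"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

dataset = load_dataset(f"vcr-org/VCR-wiki-{language}-{difficulty}-test", split="test")

results = {}
for i, example in enumerate(dataset):
    # The instruction text and the "stacked_image" field are assumptions
    # about the task format.
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Restore the covered text in the image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(
        text=prompt, images=[example["stacked_image"]], return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=128)
    # batch_decode returns the prompt plus the generation; a real script
    # would strip the prompt portion before saving.
    results[str(i)] = processor.batch_decode(output_ids, skip_special_tokens=True)[0]

with open(f"{model_id.split('/')[-1]}_{difficulty}_{language}.json", "w") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```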
@@ -90,19 +91,23 @@ python3 gather_results.py --jsons_path .
 ```
 
 ### Close-source evaluation
-We provide the evaluation script for the close-source
+We provide the evaluation script for the close-source models in `src/evaluation/closed_source_eval.py`.
 
 You need an API key and a pre-saved testing dataset, and you need to specify the path where the data is saved.
 ```bash
+pip install -r requirements.txt
 cd src/evaluation
-# save the testing dataset to the path
+# [option 1 to download images for local inference] save the testing dataset locally using the script from Hugging Face
 python3 save_image_from_dataset.py --output_path .
+# [option 2 to download images for local inference] save the testing dataset locally from the GitHub repo
+# use en-easy-test-500 as an example
+git clone https://github.com/tianyu-z/VCR-wiki-en-easy-test-500.git
 
-#
-python3
+# pass --image_path "path_to_image" to run inference on locally stored images; otherwise the script streams the images from the GitHub repo
+python3 closed_source_eval.py --model_id gpt4o --dataset_handler "VCR-wiki-en-easy-test-500" --api_key "Your_API_Key"
 
 # Evaluate the results and save the evaluation metrics to {model_id}_{difficulty}_{language}_evaluation_result.json
-python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "gpt4o_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test"
+python3 evaluation_metrics.py --model_id gpt4o --output_path . --json_filename "gpt4o_en_easy.json" --dataset_handler "vcr-org/VCR-wiki-en-easy-test-500"
 
 # To get the mean score of all the `{model_id}_{difficulty}_{language}_evaluation_result.json` in `jsons_path` (and the std, confidence interval if `--bootstrap`) of the evaluation metrics
 python3 gather_results.py --jsons_path .
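The core of the `closed_source_eval.py` step above is presumably a per-image chat request to the provider's API. Below is a minimal sketch of one such query using the OpenAI Python SDK with a base64-encoded local image; the prompt text and the function name are assumptions, not the script's actual behavior.

```python
# Sketch of a single closed-source query (assumed prompt; the real script
# also handles streaming images from the GitHub repo and batching).
import base64

from openai import OpenAI

client = OpenAI(api_key="Your_API_Key")

def query_gpt4o(image_path: str) -> str:
    # query_gpt4o is a hypothetical helper, not a function from the repo.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Restore the covered text in the image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=128,
    )
    return response.choices[0].message.content
```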
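Between the query and the aggregation sits `evaluation_metrics.py`. The VCR paper reports exact-match and Jaccard-style overlap between the restored text and the ground-truth covered text; a minimal sketch of that kind of scoring is below (the whitespace tokenization is an assumption, and the repository's metric code may differ).

```python
# Sketch of per-example scoring (assumed tokenization).
def exact_match(pred: str, truth: str) -> float:
    # 1.0 if the restored text matches the ground truth exactly.
    return float(pred.strip() == truth.strip())

def jaccard(pred: str, truth: str) -> float:
    # Token-set overlap between prediction and ground truth.
    p, t = set(pred.split()), set(truth.split())
    return len(p & t) / len(p | t) if p | t else 1.0
```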
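Finally, the `gather_results.py` call at the end of the block averages the per-example scores in each `*_evaluation_result.json` and, with `--bootstrap`, resamples them for a std and confidence interval. A sketch of that aggregation follows; the JSON layout (files mapping example ids to numeric scores) is an assumption.

```python
# Sketch of gather_results-style aggregation (assumed JSON layout).
import glob
import json

import numpy as np

def gather(jsons_path: str, bootstrap: bool = False, n_boot: int = 1000) -> None:
    for path in glob.glob(f"{jsons_path}/*_evaluation_result.json"):
        with open(path) as f:
            scores = np.array(list(json.load(f).values()), dtype=float)
        report = {"mean": scores.mean()}
        if bootstrap:
            # Resample with replacement to estimate std and a 95% CI.
            rng = np.random.default_rng(0)
            boots = np.array([
                rng.choice(scores, size=len(scores)).mean() for _ in range(n_boot)
            ])
            report["std"] = boots.std()
            report["95%_ci"] = (np.percentile(boots, 2.5), np.percentile(boots, 97.5))
        print(path, report)
```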
@@ -115,7 +120,6 @@ pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
 # We use HuggingFaceM4/idefics2-8b and vcr_wiki_en_easy as an example
 python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --model idefics2 --model_args pretrained="HuggingFaceM4/idefics2-8b" --tasks vcr_wiki_en_easy --batch_size 1 --log_samples --log_samples_suffix HuggingFaceM4_idefics2-8b_vcr_wiki_en_easy --output_path ./logs/
 ```
-
 `lmms-eval` supports the following VCR `--tasks` settings:
 
 * English