---
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: Multi-source Video Captioning
size_categories:
- 1K<n<10K
---

# Multi-source Video Captioning (MSVC) Dataset Card

## Dataset details

**Dataset type:**
MSVC is a collection of video captioning data constructed to enable a robust and thorough evaluation of the video captioning capabilities of Video-LLMs.

**Dataset detail:**
MSVC is introduced to address limitations in existing video captioning benchmarks. It samples a total of 1,500 videos with human-annotated captions from [MSVD](https://www.aclweb.org/anthology/P11-1020/), [MSRVTT](http://openaccess.thecvf.com/content_cvpr_2016/papers/Xu_MSR-VTT_A_Large_CVPR_2016_paper.pdf), and [VATEX](http://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_VaTeX_A_Large-Scale_High-Quality_Multilingual_Dataset_for_Video-and-Language_Research_ICCV_2019_paper.pdf), ensuring diverse scenarios and domains.
Traditional evaluation metrics rely on exact-match statistics between generated and ground-truth captions, which are limited in capturing the richness of video content. Thus, we use a ChatGPT-assisted evaluation similar to [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT/blob/main/quantitative_evaluation/README.md). Both generated and human-annotated captions are evaluated by GPT-3.5-turbo (0613) for Correctness of Information and Detail Orientation.
It is worth noting that each video in the MSVC benchmark is annotated with multiple human-written captions, covering different aspects of the video. This comprehensive annotation ensures a robust and thorough evaluation of Video-LLMs.

**Data instructions:**
Please download the raw videos from their official websites and arrange them in the following structure:
```bash
VideoLLaMA2
β”œβ”€β”€ eval
β”‚   β”œβ”€β”€ MSVC
β”‚   β”‚   β”œβ”€β”€ msvd/
β”‚   β”‚   β”‚   β”œβ”€β”€ lw7pTwpx0K0_38_48.avi
β”‚   β”‚   β”‚   └── ...
β”‚   β”‚   β”œβ”€β”€ msrvtt/
β”‚   β”‚   β”‚   β”œβ”€β”€ video9921.mp4
β”‚   β”‚   β”‚   └── ...
β”‚   β”‚   └── vatex/
β”‚   β”‚       β”œβ”€β”€ 9giWHf6Pf24.mp4
β”‚   β”‚       └── ...
```
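
The caption annotations in this repository are stored as JSON. Below is a minimal sketch of joining them to the downloaded videos; the annotation filename (`msvc.json`) and the `source`, `video_id`, and `captions` fields are hypothetical placeholders, so check the shipped JSON for the actual schema:

```python
import json
from pathlib import Path

# Root of the layout shown above.
EVAL_ROOT = Path("VideoLLaMA2/eval/MSVC")

# Hypothetical annotation file and field names; inspect the shipped
# JSON for the real schema before relying on this sketch.
with open("msvc.json") as f:
    records = json.load(f)

for rec in records:
    # Each video sits under its source benchmark's subdirectory
    # (msvd/, msrvtt/, or vatex/).
    video_path = EVAL_ROOT / rec["source"] / rec["video_id"]
    if not video_path.exists():
        print(f"missing video: {video_path}")
    # Each video carries multiple human-written captions (see above).
    ground_truth_captions = rec["captions"]
```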


**GPT-3.5 evaluation prompts:**
```python
# Correctness evaluation: messages for the Chat Completions API.
# `question`, `answer`, and `pred` are filled in per video sample.
correctness_messages = [
    {
        "role": "system",
        "content":
            "You are an intelligent chatbot designed for evaluating the factual accuracy of generative outputs for video-based question-answer pairs. "
            "Your task is to compare the predicted answer with these correct answers and determine if they are factually consistent. Here's how you can accomplish the task:"
            "------"
            "##INSTRUCTIONS: "
            "- Focus on the factual consistency between the predicted answer and the correct answer. The predicted answer should not contain any misinterpretations or misinformation.\n"
            "- The predicted answer must be factually accurate and align with the video content.\n"
            "- Consider synonyms or paraphrases as valid matches.\n"
            "- Evaluate the factual accuracy of the prediction compared to the answer.",
    },
    {
        "role": "user",
        "content":
            "Please evaluate the following video-based question-answer pair:\n\n"
            f"Question: {question}\n"
            f"Correct Answers: {answer}\n"
            f"Predicted Answer: {pred}\n\n"
            "Provide your evaluation only as a factual accuracy score where the factual accuracy score is an integer value between 0 and 5, with 5 indicating the highest level of factual consistency. "
            "Please generate the response in the form of a Python dictionary string with keys 'score', where its value is the factual accuracy score in INTEGER, not STRING. "
            "DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION. Only provide the Python dictionary string. "
            "For example, your response should look like this: {'score': 4}.",
    },
]
```
```python
# Detail orientation evaluation: same structure as above.
detail_messages = [
    {
        "role": "system",
        "content":
            "You are an intelligent chatbot designed for evaluating the detail orientation of generative outputs for video-based question-answer pairs. "
            "Your task is to compare the predicted answer with these correct answers and determine its level of detail, considering both completeness and specificity. Here's how you can accomplish the task:"
            "------"
            "##INSTRUCTIONS: "
            "- Check if the predicted answer covers all major points from the video. The response should not leave out any key aspects.\n"
            "- Evaluate whether the predicted answer includes specific details rather than just generic points. It should provide comprehensive information that is tied to specific elements of the video.\n"
            "- Consider synonyms or paraphrases as valid matches.\n"
            "- Provide a single evaluation score that reflects the level of detail orientation of the prediction, considering both completeness and specificity.",
    },
    {
        "role": "user",
        "content":
            "Please evaluate the following video-based question-answer pair:\n\n"
            f"Question: {question}\n"
            f"Correct Answers: {answer}\n"
            f"Predicted Answer: {pred}\n\n"
            "Provide your evaluation only as a detail orientation score where the detail orientation score is an integer value between 0 and 5, with 5 indicating the highest level of detail orientation. "
            "Please generate the response in the form of a Python dictionary string with keys 'score', where its value is the detail orientation score in INTEGER, not STRING. "
            "DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION. Only provide the Python dictionary string. "
            "For example, your response should look like this: {'score': 4}.",
    },
]
```
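
To run the evaluation, each message list above is sent to the judge model and the returned dictionary string is parsed. A minimal sketch, assuming the `openai` v1 Python client and the `correctness_messages` list from the first block (the 0613 snapshot may no longer be served, so substitute a current model if needed):

```python
import ast

from openai import OpenAI  # openai>=1.0 client interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def gpt_judge(messages, model="gpt-3.5-turbo-0613"):
    """Send one evaluation prompt and parse the dictionary-string reply."""
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,  # deterministic judging; an assumption, not from the card
    )
    reply = response.choices[0].message.content
    # The prompt asks for a reply like "{'score': 4}"; literal_eval
    # parses that string without executing arbitrary code.
    return ast.literal_eval(reply)["score"]


# score = gpt_judge(correctness_messages)  # integer in [0, 5]
```

Per-video scores for both criteria are then averaged over the benchmark, following the Video-ChatGPT protocol linked above.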

**Dataset date:**
MSVC was released in June 2024.

**Paper or resources for more information:**
https://github.com/DAMO-NLP-SG/VideoLLaMA2

**Where to send questions or comments about the dataset:**
https://github.com/DAMO-NLP-SG/VideoLLaMA2/issues

## Citation

If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
  title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal={arXiv preprint arXiv:2406.07476},
  year={2024},
  url={https://arxiv.org/abs/2406.07476}
}

@article{damonlpsg2023videollama,
  title={Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
  author={Zhang, Hang and Li, Xin and Bing, Lidong},
  journal={arXiv preprint arXiv:2306.02858},
  year={2023},
  url={https://arxiv.org/abs/2306.02858}
}
```

## Intended use
**Primary intended uses:**
The primary use of MSVC is research on Video-LLMs.

**Primary intended users:**
The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.