---
license: cc-by-4.0
pretty_name: AViMoS
size_categories:
- 1K<n<10K
---

# Dataset for ECCV-AIM Video Saliency Prediction Challenge 2024

[![Page](https://img.shields.io/badge/Challenge-Page-blue)](https://challenges.videoprocessing.ai/challenges/video-saliency-prediction.html)
[![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/)
[![Challenges](https://img.shields.io/badge/Challenges-AIM%202024-orange)](https://cvlai.net/aim/2024/)
[![Benchmarks](https://img.shields.io/badge/Benchmarks-VideoProcessing-purple)](https://videoprocessing.ai/benchmarks/)

We provide a novel audio-visual mouse saliency (<em>AViMoS</em>) dataset with the following key features:
* Diverse content: movie, sports, live, vertical videos, etc.;
* Large scale: **1500** videos with a mean duration of **19s**;
* High resolution: all streams are **FullHD**;
* **Audio** track saved and played to observers;
* Mouse fixations from **>5000** observers (**>70** per video);
* License: **CC-BY**.

File structure:
1) `Videos.zip` — 1500 (1000 Train + 500 Test) `.mp4` videos (kind reminder: many videos contain an audio stream, and observers watched them with the sound turned ON!)

2) `TrainTestSplit.json` — the Train/Public Test/Private Test split of all videos

3) `SaliencyTrain.zip`/`SaliencyTest.zip` — almost losslessly compressed (crf 0, 10-bit, min-max normalized) continuous saliency-map videos for the Train/Test subsets

4) `FixationsTrain.zip`/`FixationsTest.zip` — contain the following files for the Train/Test subsets:

* `.../video_name/fixations.json` — per-frame fixation coordinates from which the saliency maps were obtained; this JSON is used for metrics calculation (see the loading sketch after this list)

* `.../video_name/fixations/` — binary fixation maps in `.png` format (since some fixations can share the same pixel, this is a lossy representation and is NOT used for calculating metrics or generating Gaussians; we provide it for visualization and frame-count checks)

5) `VideoInfo.json` — meta information about each video (e.g., license)

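As a minimal sketch, the files above can be loaded along the following lines. The directory layout under the extracted archives and the exact JSON schemas are assumptions made for illustration (check the extracted files themselves); only the archive and file names come from the list above:

```
import json

import cv2  # opencv-python, as installed in the evaluation environment below

# Assumption: TrainTestSplit.json maps split names to lists of video names.
with open("TrainTestSplit.json") as f:
    split = json.load(f)

# Assumption: fixations.json maps frame indices to lists of [x, y] coordinates.
video_name = "some_video"  # hypothetical name; take a real one from the split
with open(f"FixationsTrain/Train/{video_name}/fixations.json") as f:
    fixations = json.load(f)
print(f"{len(fixations)} frames with recorded fixations")

# Ground-truth saliency maps are stored as near-lossless videos; OpenCV can read
# them frame by frame (note that OpenCV typically decodes to 8-bit).
cap = cv2.VideoCapture(f"SaliencyTrain/Train/{video_name}.mp4")
ok, saliency_frame = cap.read()
cap.release()
```
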
## Evaluation

### Environment setup

```
conda create -n saliency python=3.8.16
conda activate saliency
pip install numpy==1.24.2 opencv-python==4.7.0.72 tqdm==4.65.0
conda install ffmpeg=4.4.2 -c conda-forge
```
### Run evaluation
Archives with predicted saliency videos were accepted from challenge participants as submissions and scored using the same pipeline as in `bench.py`.

Usage example:

1) Check that your predictions match the structure and names of the [baseline CenterPrior submission](https://drive.google.com/file/d/1rPgMdb4L79OD2vvpDQyqWZIDox78rmxG/view)
2) Install the dependencies: `pip install -r requirements.txt`, `conda install ffmpeg`
3) Download and extract the `SaliencyTest.zip`, `FixationsTest.zip`, and `TrainTestSplit.json` files from the dataset page
4) Run `python bench.py` with the following flags (an assembled command is shown after this list):
* `--model_video_predictions ./SampleSubmission-CenterPrior` — folder with predicted saliency videos
* `--model_extracted_frames ./SampleSubmission-CenterPrior-Frames` — folder to store extracted prediction frames (must not exist at launch time; requires ~170 GB of free space)
* `--gt_video_predictions ./SaliencyTest/Test` — folder from the dataset page with ground-truth saliency videos
* `--gt_extracted_frames ./SaliencyTest-Frames` — folder to store extracted ground-truth frames (must not exist at launch time; requires ~170 GB of free space)
* `--gt_fixations_path ./FixationsTest/Test` — folder from the dataset page with ground-truth fixations
* `--split_json ./TrainTestSplit.json` — JSON from the dataset page with the video-name split
* `--results_json ./results.json` — path to the output results JSON
* `--mode public_test` — which subset to evaluate: `public_test` or `private_test`
5) The resulting metrics will be written to the path given by `--results_json`
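
For reference, the flags above assembled into a single command for the public test subset (using the example paths from the flag list):

```
python bench.py \
    --model_video_predictions ./SampleSubmission-CenterPrior \
    --model_extracted_frames ./SampleSubmission-CenterPrior-Frames \
    --gt_video_predictions ./SaliencyTest/Test \
    --gt_extracted_frames ./SaliencyTest-Frames \
    --gt_fixations_path ./FixationsTest/Test \
    --split_json ./TrainTestSplit.json \
    --results_json ./results.json \
    --mode public_test
```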

## Challenge Leaderboard

Please refer to the paper to learn about the teams' solutions, and to the challenge page for more results.

Here we only provide the final leaderboard:

| Team Name     | AUC-Judd         | CC               | SIM              | NSS              | Rank | #Params (M) |
|---------------|:----------------:|:----------------:|:----------------:|:----------------:|:----:|:-----------:|
| CV_MM         | **0.894**        | **0.774**        | **0.635**        | **3.464**        | 1.00 | 420.5       |
| VistaHL       | <ins>0.892</ins> | <ins>0.769</ins> | <ins>0.623</ins> | 3.352            | 2.75 | 187.7       |
| PeRCeiVe Lab  | 0.857            | <em>0.766</em>   | 0.610            | <ins>3.422</ins> | 3.75 | 402.9       |
| SJTU-MML      | 0.858            | 0.760            | <em>0.615</em>   | 3.356            | 4.00 | 1288.7      |
| MVP           | 0.838            | 0.749            | 0.587            | <em>3.404</em>   | 5.00 | 99.6        |
| ZenithChaser  | <em>0.869</em>   | 0.606            | 0.517            | 2.482            | 5.50 | 0.19        |
| Exodus        | 0.861            | 0.599            | 0.510            | 2.491            | 6.00 | 69.7        |
| Baseline (CP) | 0.833            | 0.449            | 0.424            | 1.659            | 8.00 | -           |

## Citation

Please cite the paper if you find the challenge materials useful for your research:

```
@article{
}
```