---
language: en
license: mit
tags:
- vision
- image-to-text
inference: false
model_name: microsoft/git-base-msrvtt-qa
---

# GIT (GenerativeImage2Text), base-sized, fine-tuned on MSRVTT-QA

GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on MSRVTT-QA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).

Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on many (image, text) pairs.

The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens.

The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.

![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg)

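To make this masking scheme concrete, below is a minimal, illustrative sketch (not the actual GIT implementation) of such a combined attention mask for a sequence of image tokens followed by text tokens; whether image tokens may attend to text tokens is not spelled out above, so blocking that direction is an assumption here:

```python
import torch

def git_style_attention_mask(num_image_tokens: int, num_text_tokens: int) -> torch.Tensor:
    """Illustrative combined mask; True means attention is allowed."""
    total = num_image_tokens + num_text_tokens
    mask = torch.zeros(total, total, dtype=torch.bool)
    # Every position can attend to all image tokens (bidirectional over the image).
    mask[:, :num_image_tokens] = True
    # Text tokens additionally attend causally among themselves;
    # image tokens are assumed not to attend to text tokens.
    mask[num_image_tokens:, num_image_tokens:] = torch.tril(
        torch.ones(num_text_tokens, num_text_tokens, dtype=torch.bool)
    )
    return mask
```
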
This allows the model to be used for tasks like:

- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text)

## Intended uses & limitations

You can use the raw model for video question answering (QA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html).
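
As a quick start, below is a minimal sketch of video QA with this checkpoint. It assumes PyAV (`av`) is installed for video decoding, `video.mp4` is a hypothetical local clip, and uses naive evenly-spaced frame sampling; the CLS-prepended question format follows the GIT documentation:

```python
import av
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-msrvtt-qa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-msrvtt-qa")

# Decode the clip and sample frames evenly; the video checkpoints were
# fine-tuned on a fixed number of frames per clip.
container = av.open("video.mp4")  # hypothetical local file
frames = [f.to_ndarray(format="rgb24") for f in container.decode(video=0)]
num_frames = model.config.num_image_with_embedding
indices = np.linspace(0, len(frames) - 1, num=num_frames).astype(int)
sampled = [frames[i] for i in indices]

# Shape (1, num_frames, channels, height, width) for video inputs.
pixel_values = processor(images=sampled, return_tensors="pt").pixel_values.unsqueeze(0)

# Prepend the CLS token to the tokenized question, as in the GIT documentation.
question = "what is the video about?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([processor.tokenizer.cls_token_id] + input_ids).unsqueeze(0)

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```

The generated text contains the question followed by the predicted answer; `model.config.num_image_with_embedding` gives the number of frames the video checkpoints expect.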

## Training data

From the paper:

> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions
> (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016),
> Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B
> data following a similar collection procedure in Hu et al. (2021a).

However, this pre-training data is for the model referred to as "GIT" in the paper, which is not open-sourced.

This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs.

Next, the model was fine-tuned on MSRVTT-QA.

See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.

### Preprocessing

We refer to the original repo for details on preprocessing during training.

During validation, the shorter edge of each frame is resized, after which a center crop is taken at a fixed resolution. Next, the frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
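
For illustration, here is a torchvision sketch of that validation-time pipeline, assuming a 224x224 target resolution and the standard ImageNet statistics; the exact values are defined by the model's image processor:

```python
from torchvision import transforms

# Sketch of the validation-time preprocessing described above,
# assuming a 224x224 target resolution.
val_transform = transforms.Compose([
    transforms.Resize(224),       # resize the shorter edge to 224
    transforms.CenterCrop(224),   # fixed-size center crop
    transforms.ToTensor(),
    transforms.Normalize(         # standard ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225],
    ),
])
```

In practice, `AutoProcessor.from_pretrained("microsoft/git-base-msrvtt-qa")` applies the appropriate preprocessing for you.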

## Evaluation results

For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).