AhmedSSabir committed
Commit • 52497dc
1 Parent(s): 89cb1e1
Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ Please refer to [Github](https://github.com/ahmedssabir/Visual-Semantic-Relatedn
# Overview

-We enrich COCO-caption with **Textual Visual Context** information. We use [ResNet152
+We enrich COCO-caption with **Textual Visual Context** information. We use [ResNet152](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), [CLIP](https://github.com/openai/CLIP) and [Faster R-CNN](https://github.com/tensorflow/models/tree/master/research/object_detection) to extract
object information for each COCO-caption image. We use three filtering approaches to ensure the quality of the dataset: (1) a confidence threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, which uses semantic similarity to remove duplicated objects; and (3) a semantic relatedness score as a soft label: to guarantee that the visual context and the caption are strongly related, we use [Sentence RoBERTa](https://www.sbert.net) (SBERT uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity) to assign a soft label via cosine similarity with a **th**reshold and annotate the final label (1 if the similarity exceeds th, else 0, with th ∈ {0.2, 0.3, 0.4}); a minimal sketch of this step follows the diff. Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use [BERT followed by a shallow CNN](https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic) [(Kim, 2014)](https://arxiv.org/pdf/1408.5882.pdf).
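Below is a minimal sketch, not the authors' released code, of the semantic relatedness soft-label step described in the updated paragraph: it scores a caption against its textual visual context with Sentence-BERT cosine similarity and thresholds the score into a binary label. The `stsb-roberta-large` checkpoint and the `soft_label` helper are assumptions for illustration; the exact SBERT checkpoint is not stated in this excerpt, and the README considers th values of 0.2, 0.3 and 0.4.

```python
# Sketch of the soft-label step (assumptions noted above), using the
# sentence-transformers package (https://www.sbert.net).
from sentence_transformers import SentenceTransformer, util

# A RoBERTa-based SBERT checkpoint; the exact model used in the dataset
# is an assumption here.
model = SentenceTransformer("stsb-roberta-large")

def soft_label(caption: str, visual_context: str, th: float = 0.4) -> tuple[float, int]:
    """Return (cosine similarity, binary label) for one caption / visual-context pair."""
    cap_emb = model.encode(caption, convert_to_tensor=True)
    ctx_emb = model.encode(visual_context, convert_to_tensor=True)
    score = util.cos_sim(cap_emb, ctx_emb).item()
    # Label 1 if the caption and the textual visual context are related
    # strongly enough, else 0.
    return score, int(score > th)

# Example: an object class predicted by the visual classifier vs. a COCO caption.
score, label = soft_label("a man riding a surfboard on a wave", "surfboard", th=0.4)
print(f"cosine={score:.3f} label={label}")
```

The same scoring could be rerun with th = 0.2 or 0.3 to reproduce the other label variants mentioned in the README.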