visheratin committed
Commit: a47998b
Parent(s): 38f1cde
Update README.md
README.md CHANGED
@@ -3,6 +3,18 @@ tags:
 - clip
 library_name: open_clip
 pipeline_tag: zero-shot-image-classification
-license:
+license: cc-by-nc-4.0
+datasets:
+- visheratin/laion-coco-nllb
 ---
-
+
+## Model Summary
+
+NLLB-CLIP is a model that combines a text encoder from the [NLLB model](https://huggingface.co/facebook/nllb-200-distilled-1.3B) and an image encoder from the
+LAION [CLIP](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K). This extends the model's capabilities
+to the 201 languages of the Flores-200 benchmark. NLLB-CLIP achieves state-of-the-art results on the [Crossmodal-3600](https://google.github.io/crossmodal-3600/) dataset by performing
+particularly well on low-resource languages. You can find more details about the model in the [paper](https://arxiv.org/abs/2309.01859).
+
+## Acknowledgements
+
+I thank [ML Collective](https://mlcollective.org/) for providing Google Cloud compute resources to train the OpenCLIP-compatible version of NLLB-CLIP.
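Since the card declares `library_name: open_clip` and a `zero-shot-image-classification` pipeline tag, a minimal usage sketch may help. The repo id below and the image filename are placeholders, not confirmed by this commit; only the standard `open_clip` loading API is assumed.

```python
# Minimal zero-shot classification sketch with open_clip.
# Assumptions: the model is published as an OpenCLIP-compatible Hub repo
# (repo id below is a placeholder) and a local image "cat.jpg" exists.
import torch
import open_clip
from PIL import Image

repo = "hf-hub:visheratin/nllb-clip-base-oc"  # placeholder repo id
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
# English prompts shown here; the NLLB text encoder also covers
# the other Flores-200 languages.
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings, then take cosine similarities as logits.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability per prompt for the image
```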