gmastrapas committed
Commit ae96581 · 1 Parent(s): cd77b48

docs: update README

Files changed (1)
  1. README.md +116 -4
README.md CHANGED
@@ -1,10 +1,122 @@
  # Jina CLIP
 
- The Jina CLIP implementation is hosted in this repository. The model uses:
- * the EVA 02 architecture for the vision tower
- * the Jina BERT with Flash Attention model as a text tower
 
- To use the Jina CLIP model, the following packages are required:
  * `torch`
  * `timm`
  * `transformers`
 
+ ---
+ tags:
+ - transformers
+ - xlm-roberta
+ - eva02
+ - clip
+ library_name: transformers
+ license: cc-by-nc-4.0
+ language:
+ - multilingual
+ - af
+ - am
+ - ar
+ - as
+ - az
+ - be
+ - bg
+ - bn
+ - br
+ - bs
+ - ca
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fr
+ - fy
+ - ga
+ - gd
+ - gl
+ - gu
+ - ha
+ - he
+ - hi
+ - hr
+ - hu
+ - hy
+ - id
+ - is
+ - it
+ - ja
+ - jv
+ - ka
+ - kk
+ - km
+ - kn
+ - ko
+ - ku
+ - ky
+ - la
+ - lo
+ - lt
+ - lv
+ - mg
+ - mk
+ - ml
+ - mn
+ - mr
+ - ms
+ - my
+ - ne
+ - nl
+ - 'no'
+ - om
+ - or
+ - pa
+ - pl
+ - ps
+ - pt
+ - ro
+ - ru
+ - sa
+ - sd
+ - si
+ - sk
+ - sl
+ - so
+ - sq
+ - sr
+ - su
+ - sv
+ - sw
+ - ta
+ - te
+ - th
+ - tl
+ - tr
+ - ug
+ - uk
+ - ur
+ - uz
+ - vi
+ - xh
+ - yi
+ - zh
+ ---
+
  # Jina CLIP
 
+ Core implementation of Jina CLIP. The model uses:
+ * the [EVA 02](https://github.com/baaivision/EVA/tree/master/EVA-CLIP/rei/eva_clip) architecture for the vision tower
+ * the [Jina XLM RoBERTa with Flash Attention](https://huggingface.co/jinaai/xlm-roberta-flash-implementation) model as a text tower
+
+ ## Models that use this implementation
+
+ - [jinaai/jina-clip-v2](https://huggingface.co/jinaai/jina-clip-v2)
+ - [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1)
+
+ ## Requirements
 
+ To use the Jina CLIP source code, the following packages are required:
  * `torch`
  * `timm`
  * `transformers`
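Since the models that use this implementation are loaded through `transformers` with remote code, a minimal usage sketch might look like the following. This is an illustration, not part of the commit: the model ID `jinaai/jina-clip-v1` and the `encode_text`/`encode_image` convenience methods come from that model's card, and the image path is a placeholder.

```python
# Minimal sketch of loading a Jina CLIP checkpoint built on this implementation.
# Requires torch, timm and transformers; downloads model weights on first run.
from transformers import AutoModel

# trust_remote_code=True fetches the modeling code hosted in this repository
model = AutoModel.from_pretrained("jinaai/jina-clip-v1", trust_remote_code=True)

# encode_text / encode_image are convenience methods defined by the remote code
text_embeddings = model.encode_text(["A photo of a blue cat"])
# image_embeddings = model.encode_image(["path/to/image.jpg"])  # placeholder path
```

`trust_remote_code=True` is required because the architecture classes live in this repository rather than in the `transformers` library itself.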