---
license: mit
dataset_info:
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  - name: sim_pos
    dtype: float64
  - name: sim_neg
    dtype: float64
  - name: len_anc
    dtype: int64
  - name: len_pos
    dtype: int64
  - name: len_neg
    dtype: int64
  splits:
  - name: train
    num_bytes: 614206347
    num_examples: 1000000
  download_size: 308842392
  dataset_size: 614206347
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---


# Arabic 1 Million Triplets (curated):
This is a curated dataset for training Arabic ColBERT and SBERT models (among other uses).  
In addition to the `anchor`, `positive` and `negative` columns, the dataset has two columns, `sim_pos` and `sim_neg`, which are the cosine 
similarities between the anchor (query) and the positive and negative examples respectively.  
The last three columns are the word lengths of the `anchor`, `positive` and `negative` examples. Length uses a simple split on whitespace, not tokens.  
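
A minimal sketch of loading the data and checking the columns (the repository ID below is a placeholder for this dataset's actual ID, and the word-count check assumes a plain whitespace split):

```python
from datasets import load_dataset

# Placeholder repository ID; substitute this dataset's actual ID.
ds = load_dataset("<account>/arabic-1m-triplets", split="train")

row = ds[0]
print(sorted(row.keys()))
# ['anchor', 'len_anc', 'len_neg', 'len_pos', 'negative', 'positive', 'sim_neg', 'sim_pos']

# The len_* columns are plain word counts (whitespace split, not tokens).
print(row["len_anc"], len(row["anchor"].split()))  # should match
```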

The cosine similarities were computed with the embedding model [AbderrahmanSkiredj1/Arabic_text_embedding_for_sts](https://huggingface.co/AbderrahmanSkiredj1/Arabic_text_embedding_for_sts)  
(inspired by [Omar Nicar](https://huggingface.co/Omartificial-Intelligence-Space), who made the
[first Arabic SBERT embeddings model](https://huggingface.co/Omartificial-Intelligence-Space/Arabert-all-nli-triplet-Matryoshka) and a triplets dataset based on NLI).  
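
The scores can be reproduced approximately with `sentence-transformers`, as in the sketch below (this assumes the model loads as a standard sentence-transformers checkpoint; the triplet strings are illustrative, not taken from the dataset):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("AbderrahmanSkiredj1/Arabic_text_embedding_for_sts")

# Illustrative triplet (not from the dataset).
anchor   = "ما هي عاصمة فرنسا؟"
positive = "باريس هي عاصمة فرنسا."
negative = "القاهرة هي أكبر مدينة في مصر."

emb = model.encode([anchor, positive, negative], convert_to_tensor=True)
sim_pos = cos_sim(emb[0], emb[1]).item()
sim_neg = cos_sim(emb[0], emb[2]).item()
print(round(sim_pos, 3), round(sim_neg, 3))
```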

# Why another dataset?
While training an Arabic ColBERT model on a sample from the mMARCO dataset, I noticed retrieval issues. All of these triplet datasets are translations, and the 
quality was not up to expectation. I took the dataset used by the embedding model (NLI plus some 300K rows) and 1 million samples from mMARCO, removed 
lines that contained separate Latin words/phrases, and sampled 1 million rows from the combined data. Then I added the similarity and length columns.  
This should enable researchers and users to filter on several criteria (including hard negatives). This is not to say the model used for the similarities was perfect: 
in some cases, examples annotated as negative were identical to the anchor/query. Adding the similarity columns took more time than training models. 
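
For example, one possible hard-negative filter (a sketch only; the thresholds are illustrative and the repository ID is again a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("<account>/arabic-1m-triplets", split="train")  # placeholder ID

# Keep triplets where the positive is clearly related and the negative is
# related but less similar than the positive ("hard" negatives), and drop
# degenerate rows where the negative is nearly identical to the anchor.
hard = ds.filter(
    lambda r: r["sim_pos"] > 0.6
    and 0.3 < r["sim_neg"] < 0.8
    and r["sim_neg"] < r["sim_pos"] - 0.05
)
print(len(hard), "hard-negative triplets")
```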

# Arabic SBERT and ColBERT models:
Filtered subsets based on certain criteria show impressive performance. Models will be uploaded and linked from here when ready. 
If you saw earlier versions of triplets datasets under this account, they have been removed in favor of this one. If you downloaded or duplicated a triplets 
dataset from this account prior to Saturday 3 PM Jerusalem time on July 27th, 2024, you are also advised to get the updated version.