---
language:
- th
- en
tags:
- openthaigpt
license: mit
---
### Introduction

The foundational technology behind generative prompt models is language-image pre-training, such as CLIP (Contrastive Language-Image Pre-Training), which aligns the latent spaces of an image encoder and a text encoder. The resulting latent vectors can be used for zero-shot classification and image search. For generative prompt models, we can train a generative model on top of a frozen image encoder and then swap in the text encoder as the prompt encoder in the inference pipeline.

**Scope of work**

Given our limited computing resources, datasets, and engineers, we propose to train a CLIP model in two stages:
- **Stage 1: Language encoder distillation training.** We train a Thai (or bilingual EN-TH) text encoder against the original CLIP text encoder, following Multilingual-CLIP, using EN-EN and EN-TH text pairs from machine-translation datasets (a minimal sketch follows below).
- **Stage 2: Continue CLIP pre-training with a frozen image encoder.** A distilled model may not understand every token, especially domain-specific words, so we continue CLIP (or LiT, or SigLiT) pre-training with a frozen image encoder to learn the details of those words.

After we have our own CLIP model, we will replace the text encoder of downstream CLIP applications with it, or fine-tune the application model to push performance further.
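To make the stage-1 objective concrete, the sketch below distills the frozen CLIP text encoder into a Thai/bilingual student: the student's pooled embedding of the Thai side of a translation pair is pushed toward the teacher's embedding of the English side with an MSE loss. The teacher/student checkpoints, pooling choice, and training loop here are illustrative assumptions, not the exact recipe used for this model.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer, CLIPTextModelWithProjection, CLIPTokenizer

# Teacher: the original CLIP text encoder, kept frozen.
teacher_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
teacher = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in teacher.parameters():
    p.requires_grad = False

# Student: a Thai encoder (hypothetical choice) plus a projection to CLIP's embedding size.
student_tok = AutoTokenizer.from_pretrained("airesearch/wangchanberta-base-att-spm-uncased")
student = AutoModel.from_pretrained("airesearch/wangchanberta-base-att-spm-uncased")
projection = nn.Linear(student.config.hidden_size, teacher.config.projection_dim)

optimizer = torch.optim.AdamW(list(student.parameters()) + list(projection.parameters()), lr=1e-5)
mse = nn.MSELoss()

# One toy EN-TH pair; in practice this loops over a machine-translation parallel corpus.
en_texts = ["a photo of a dog"]
th_texts = ["รูปภาพของสุนัข"]  # "a photo of a dog"

with torch.no_grad():
    target = teacher(**teacher_tok(en_texts, padding=True, return_tensors="pt")).text_embeds

student_inputs = student_tok(th_texts, padding=True, return_tensors="pt")
student_embeds = projection(student(**student_inputs).last_hidden_state[:, 0])  # CLS pooling

loss = mse(student_embeds, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the same loss is also applied to EN-EN pairs so the student retains the teacher's English behaviour, following the Multilingual-CLIP recipe.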
## How to use

- #### Install python package
```python
pip install thai2transformers==0.1.2
```

- ### Preprocessing

Texts are preprocessed with the following rules ([process_transformers](https://github.com/vistec-AI/thai2transformers/blob/master/thai2transformers/preprocess.py)); a short usage example follows the list.

- Replace HTML forms of characters with the actual characters, such as `&nbsp;` with a space and `<br>` with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one space with a single space.
- Remove more than 3 repetitive characters, such as ดีมากกก to ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)'s `newmm` dictionary-based maximal-matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with `<_>`. The SentencePiece tokenizer combines spaces with other tokens. Since spaces serve as punctuation in Thai, for example marking sentence boundaries similar to periods in English, combining them with other tokens would omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with `<_>`.
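For illustration, the same `process_transformers` function can be called directly on a raw string before tokenization; the exact normalized output depends on the installed thai2transformers version.

```python
from thai2transformers.preprocess import process_transformers

# Repeated characters are reduced and spaces are handled according to the rules above.
text = "วันนี้ อากาศดีมากกก"  # "today the weather is very good" with exaggerated repetition
print(process_transformers(text))
```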
- #### How to load the text encoder

```python
from transformers import AutoModel, AutoProcessor
from thai2transformers.preprocess import process_transformers

model = AutoModel.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True)

input_text = ["This is dog", "how are you today", "สวัสดีครับ วันนี้อากาศร้อนมาก"]
processed_input_text = [process_transformers(input_text_) for input_text_ in input_text]
text_tokens = processor(text=processed_input_text, padding=True, return_tensors="pt")
embedding = model(**text_tokens).text_embeds
print(embedding, embedding.shape)
```

- #### Output:

```python
tensor([[ 0.0318,  0.0341, -0.1317,  ..., -0.2763, -0.2103,  0.0968],
        [ 0.0579, -0.1373, -0.0293,  ..., -0.3926, -0.2002, -0.0497],
        [ 0.0303,  0.0440,  0.0217,  ..., -0.3282, -0.0100, -0.0757]],
       grad_fn=<...>) torch.Size([3, 512])
```

## Example of model usage

- ### Zero shot classification

The example below assumes `example.jpg` is a local image file.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoModel, AutoProcessor, CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load image model and processor.
image_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)

# Load text model and processor.
text_processor = AutoProcessor.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True)
text_model = AutoModel.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True).to(device)

# Images to classify; "example.jpg" is a hypothetical local file.
images = [Image.open("example.jpg")]

# Encode the candidate class labels.
class_labels = ['แมว', 'หมา', 'นก']  # cat, dog, bird
label2id = {label: i for i, label in enumerate(class_labels)}
inputs = text_processor(text=class_labels, padding=True, return_tensors="pt")
inputs = {name: tensor.to(device) for name, tensor in inputs.items()}
text_embeddings = text_model(**inputs).text_embeds
text_embeddings /= text_embeddings.norm(dim=1, keepdim=True)

# Encode the images.
inputs = image_processor(images=images, return_tensors="pt")
inputs = {name: tensor.to(device) for name, tensor in inputs.items()}
image_embeddings = image_model.get_image_features(**inputs)
image_embeddings /= image_embeddings.norm(dim=1, keepdim=True)

# Pick the most similar label for each image.
similarities = torch.mm(image_embeddings, text_embeddings.t())
logits = F.softmax(similarities, dim=1)
indices = torch.argmax(logits, dim=1)
logits = logits.detach().cpu()
indices = indices.detach().cpu()
predict = [class_labels[i] for i in indices]
print(predict)
```
- ### Text-Image retrieval

The example below assumes the image paths are hypothetical local files paired with the texts by index.

```python
import faiss
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, CLIPModel
from thai2transformers.preprocess import process_transformers

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load image model and processor.
image_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)

# Load text model and processor.
text_processor = AutoProcessor.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True)
text_model = AutoModel.from_pretrained("openthaigpt/CLIPTextCamembertModelWithProjection", trust_remote_code=True).to(device)

# Paired texts and images; text i describes image i.
input_text = ['แมวสีส้ม', 'หมาสีดำ', 'นกสีขาว']  # orange cat, black dog, white bird
images = [Image.open(path) for path in ["cat.jpg", "dog.jpg", "bird.jpg"]]

# Encode the texts.
processed_input_text = [process_transformers(input_text_) for input_text_ in input_text]
inputs = text_processor(text=processed_input_text, padding=True, return_tensors="pt")
inputs = {name: tensor.to(device) for name, tensor in inputs.items()}
text_embeddings = text_model(**inputs).text_embeds
text_embeddings /= text_embeddings.norm(dim=1, keepdim=True)

# Encode the images.
inputs = image_processor(images=images, return_tensors="pt")
inputs = {name: tensor.to(device) for name, tensor in inputs.items()}
image_embeddings = image_model.get_image_features(**inputs)
image_embeddings /= image_embeddings.norm(dim=1, keepdim=True)

# Build inner-product indices (cosine similarity, since the embeddings are normalized).
text_embeddings = text_embeddings.detach().cpu().numpy()
image_embeddings = image_embeddings.detach().cpu().numpy()
n = text_embeddings.shape[0]  # number of text-image pairs
d = text_embeddings.shape[1]  # embedding dimension
text_index = faiss.IndexFlatIP(d)
image_index = faiss.IndexFlatIP(d)
text_index.add(text_embeddings)
image_index.add(image_embeddings)

# Image search recall@k: for each text query, is the paired image among the top-k results?
distances, retrieved_indices = image_index.search(text_embeddings, k=5)
recall_image_search = sum(
    1.0 if i in indices else 0.0 for i, indices in zip(range(n), retrieved_indices)
) / float(n)

# Text search recall@k: for each image query, is the paired text among the top-k results?
distances, retrieved_indices = text_index.search(image_embeddings, k=5)
recall_text_search = sum(
    1.0 if i in indices else 0.0 for i, indices in zip(range(n), retrieved_indices)
) / float(n)
```

### Sponsors

![Sponsors](image.png)

### Authors
* Konthee Boonmeeprakob (konthee1995@gmail.com)
* Norrawich Jitaree (norrawichjitaree@gmail.com)
* Prapawin Sakdapetchsiri (prapawin.sak@gmail.com)
* Sirasit Tanrattanawong (might.la.fr@gmail.com)
* Phumiphat Charoentananuwat (phumiphatcn@gmail.com)
* Punnaruck Khapholdi (punnaruck@gmail.com)
* Isada Sukprapa (isada@nextai.co.th)
* Monthol Charattrakool (anthrax581@gmail.com)
* Peerawat Rojratchadakorn (peerawat.roj@gmail.com)