---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: name
    dtype: string
  - name: embedding
    sequence: float32
  splits:
  - name: train
    num_bytes: 714665
    num_examples: 201
  download_size: 998567
  dataset_size: 714665
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Model - "Alibaba-NLP/gte-multilingual-base"

You can find all the information about the model here.

# Search

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
from datasets import load_dataset
import numpy as np

model_name = "Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name, trust_remote_code=True)
raw_data = load_dataset("Manyah/incrustaciones")

question = ""  # your query here
question_embedding = model.encode(question)

# Compare the query embedding against all precomputed embeddings at once
embeddings = np.array(raw_data["train"]["embedding"], dtype=np.float32)
similarities = cos_sim(embeddings, question_embedding)  # shape: (num_examples, 1)

# Retrieve the context whose embedding is most similar to the query
index = int(similarities.argmax())
print(raw_data["train"][index]["context"])
```
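The retrieval step above reduces to computing cosine similarity between the query embedding and every stored embedding, then taking the argmax. A minimal self-contained sketch of that logic, using toy 3-dimensional vectors in place of the dataset's float32 embeddings (the vectors and `cosine_similarity` helper here are illustrative, not part of the dataset):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: dot product of the two vectors divided by the
    product of their norms (i.e. the cosine of the angle between them)."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" standing in for the dataset's precomputed vectors
corpus = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.0, 0.0]

# Score every corpus vector against the query and pick the best match
scores = [cosine_similarity(vec, query) for vec in corpus]
best = int(np.argmax(scores))
```

Here `best` is the row index of the most similar vector; in the real snippet that index is used to look up the matching `context` field.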