---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: name
    dtype: string
  - name: embedding
    sequence: float32
  splits:
  - name: train
    num_bytes: 717888
    num_examples: 202
  download_size: 1005715
  dataset_size: 717888
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- sentence-similarity
language:
- es
---
# Model

- "Alibaba-NLP/gte-multilingual-base"

You can find all the information about the model here.
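For reference, the sentence-similarity task this card targets is scored with cosine similarity, which can be computed with plain NumPy. A minimal sketch (the helper name `cosine_similarity` is illustrative, not part of the dataset or model):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vectors pointing in the same direction score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # → 0.0
```

This is the same quantity that `sentence_transformers.util.cos_sim` computes, batched over tensors.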
# Search
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
from datasets import load_dataset
import numpy as np

model_name = "Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name, trust_remote_code=True)

# Dataset with precomputed context embeddings
raw_data = load_dataset('Manyah/incrustaciones')

question = ""
question_embedding = model.encode(question)

# Cosine similarity between the query and every stored embedding, in one call
embeddings = np.array(raw_data['train']['embedding'], dtype=np.float32)
sim = cos_sim(embeddings, question_embedding)

# Print the most similar context
index = int(sim.argmax())
print(raw_data['train'][index]['context'])
```
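The retrieval step above reduces to an argmax over cosine similarities. A self-contained sketch with toy vectors standing in for real model embeddings, so the logic can be followed without downloading the model or dataset (the `retrieve` helper and sample data are illustrative assumptions):

```python
import numpy as np

def retrieve(query_embedding, embeddings, contexts):
    """Return the context whose embedding is most cosine-similar to the query.

    `embeddings` is an (n_examples, dim) array; `contexts` is the parallel
    list of texts, mirroring the `embedding` and `context` dataset columns.
    """
    q = np.asarray(query_embedding, dtype=np.float64)
    E = np.asarray(embeddings, dtype=np.float64)
    # Row-wise cosine similarity: dot product divided by the norms
    sims = (E @ q) / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    return contexts[int(np.argmax(sims))]

contexts = ["about cats", "about dogs", "about cars"]
embeddings = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(retrieve([0.1, 0.9], embeddings, contexts))  # → about dogs
```

With real data, `embeddings` would come from the dataset's `embedding` column and `query_embedding` from `model.encode(question)`.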