---
language: en
license: llama3.1
library_name: sentence-transformers
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - transformers
datasets:
  - beeformer/recsys-goodbooks-10k
pipeline_tag: sentence-similarity
base_model:
  - sentence-transformers/all-mpnet-base-v2
---

# Llama-goodbooks-mpnet

This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space. It is designed for use in recommender systems, both for content-based filtering and as side information for cold-start recommendation.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

# Each product (book) description is converted into a 768-dimensional embedding.
sentences = ["This is an example product description", "Each product description is converted"]

model = SentenceTransformer('beeformer/Llama-goodbooks-mpnet')
embeddings = model.encode(sentences)
print(embeddings)
```
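
The embeddings can be compared directly for content-based filtering. Below is a minimal sketch, not part of the original card, that ranks a small catalog by cosine similarity to a new (cold-start) item; the catalog descriptions and variable names are invented for illustration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('beeformer/Llama-goodbooks-mpnet')

# Placeholder catalog of item descriptions (illustrative only).
catalog = [
    "A young wizard discovers his heritage at a hidden school of magic.",
    "A dystopian novel about surveillance under a totalitarian regime.",
    "A classic romance of manners set in Regency-era England.",
]
catalog_embeddings = model.encode(catalog, convert_to_tensor=True)

# Embed a cold-start item and rank catalog items by cosine similarity.
new_item = "An epic fantasy about a boy attending a school for wizards."
query_embedding = model.encode(new_item, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, catalog_embeddings)[0]
for idx in scores.argsort(descending=True):
    print(f"{scores[idx].item():.3f}  {catalog[int(idx)]}")
```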

## Training procedure

### Pre-training

We use the pre-trained sentence-transformers/all-mpnet-base-v2 model. Please refer to its model card for more detailed information about the pre-training procedure.

### Fine-tuning

We use the initial model without modifying its architecture or pre-trained parameters. However, we cap the processed sequence length at 384 tokens to shorten training time.
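
In sentence-transformers, this limit can be set directly on the model; below is a minimal sketch of that setting (the full beeFormer training loop is not shown on this card).

```python
from sentence_transformers import SentenceTransformer

# Load the starting checkpoint and cap the processed sequence length,
# mirroring the fine-tuning setup described above (sketch only).
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
model.max_seq_length = 384
print(model.max_seq_length)  # -> 384
```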

### Dataset

We fine-tuned our model on the Goodbooks-10k dataset, using item descriptions generated with the meta-llama/Meta-Llama-3.1-8B-Instruct model. For details, please see the dataset page: beeformer/recsys-goodbooks-10k.
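
If the dataset follows the standard Hugging Face Hub layout, it can presumably be loaded with the `datasets` library; the snippet below is an assumption-based sketch, and the actual splits and column names should be checked on the dataset page.

```python
from datasets import load_dataset

# Assumed loading path; verify splits and columns on the dataset page.
ds = load_dataset("beeformer/recsys-goodbooks-10k")
print(ds)
```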

## Evaluation Results

For the IDs of the items used for cold-start evaluation, please see (links TBA).

Table with results TBA.

## Intended uses

This model was trained as a demonstration of the capabilities of the beeFormer training framework (link and details TBA) and is intended for research purposes only.

## Citation

Preprint available here

TBA