Passing input embeddings directly to the model
Hey,
do you know if there is a way to feed phi-2 directly with token embeddings (rather than token ids)?
In contrast to other HF models, the forward method does not seem to handle the 'inputs_embeds' argument.
Thanks.
Hi @SergioLimone, you can use
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the model and its tokenizer from the susnato/phi-2 checkpoint
model = AutoModelForCausalLM.from_pretrained("susnato/phi-2")
tokenizer = AutoTokenizer.from_pretrained("susnato/phi-2")
to load the model and then pass inputs_embeds directly.
This loads the Phi model from the transformers library, so you will be able to use any feature that works with other models loaded from the library, including inputs_embeds.
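For example, here is a minimal sketch of what that looks like (the prompt string is just a placeholder); the embeddings are looked up with the model's own embedding layer and passed in place of token ids:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("susnato/phi-2")
tokenizer = AutoTokenizer.from_pretrained("susnato/phi-2")

# tokenize a prompt, then look up its embeddings with the model's embedding layer
inputs = tokenizer("Hello, world!", return_tensors="pt")
embeds = model.get_input_embeddings()(inputs["input_ids"])

# pass the embeddings instead of token ids
with torch.no_grad():
    outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)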
Also, make sure you have the latest transformers installed:
pip install -U transformers
Great, thanks. However, it seems that "susnato/phi-2" does not support device_map="auto". Is there an easy fix for that?
Thanks.
Hmm, for that I think you can manually move the model to either the CPU or the GPU:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# pick the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# load the model and move its weights to the chosen device
model = AutoModelForCausalLM.from_pretrained("susnato/phi-2").to(device)
tokenizer = AutoTokenizer.from_pretrained("susnato/phi-2")
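After that, inference works as usual, as long as the inputs are moved to the same device (a quick sketch; the prompt and generation length are placeholders):

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(device)
# generate on the same device the model was moved to
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))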
Hello @SergioLimone!
We will update the model's files as soon as our ongoing PR is merged. It will fix any problems related to inputs_embeds not being accepted.
Regards,
Gustavo.