ONNX Model Produces Different Output
#2 by runski - opened
I used Optimum to convert the PyTorch version of the model to ONNX. The conversion completed without any error messages, but when I ran the model with ONNX Runtime, the output tokens were different and incorrect; the decoded text looks like gibberish. Has anyone gotten an ONNX version of this model to run properly?
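For concreteness, here is a minimal sketch of the export-and-compare flow I mean (the model id below is a placeholder, and this assumes `optimum[onnxruntime]`, `transformers`, and `torch` are installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "org/model-name"  # placeholder for the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Export the PyTorch checkpoint to ONNX on the fly via Optimum.
ort_model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

# Keep the original PyTorch model around as a reference.
pt_model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
pt_out = pt_model.generate(**inputs, max_new_tokens=20, do_sample=False)
ort_out = ort_model.generate(**inputs, max_new_tokens=20, do_sample=False)

print("PyTorch:", tokenizer.decode(pt_out[0]))
print("ONNX:   ", tokenizer.decode(ort_out[0]))  # gibberish here is the problem
```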
Hi @runski, for ONNX export you will need the changes from this transformers PR (Optimum builds on the transformers modeling code):
https://github.com/huggingface/transformers/pull/30031
I don't think the llama class currently has an mlp_bias parameter. Also, keep in mind that our model uses both attention_bias and mlp_bias.
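As a quick sanity check (a minimal sketch), you can confirm that your installed transformers exposes both flags on LlamaConfig before exporting; mlp_bias is only present once the PR above is in your install:

```python
from transformers import LlamaConfig

config = LlamaConfig()
print("attention_bias supported:", hasattr(config, "attention_bias"))
print("mlp_bias supported:", hasattr(config, "mlp_bias"))

# If mlp_bias is not supported, the model is built without MLP bias terms
# and the checkpoint's bias weights are ignored at load time, which would
# explain garbled generations from the exported ONNX model.
```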
mayank-mishra changed discussion status to closed