Does an ONNX model need to be specially modified to be supported by AMD Ryzen AI?
In the model card, it says "We develop a modified version that could be supported by AMD Ryzen AI."
I'm confused by that. Does a model need to be modified to be supported by AMD Ryzen AI?
I'm new to ML, but I thought any model converted to ONNX format would be supported by AMD Ryzen AI. Is that not true?
Can we not just take an official YOLO model in ONNX format and run it with AMD Ryzen AI as the execution provider?
@jcyrss You should be able to run a model on Ryzen AI by converting it to ONNX:
- Export to ONNX with a fixed input shape
- Quantize the model using vai_q_onnx
- Deploy on Ryzen AI (a sketch of the first two steps follows this list)
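
A minimal sketch of the export and quantization steps, assuming an Ultralytics YOLOv8 checkpoint and the vai_q_onnx API as shown in the Ryzen AI docs (parameter names such as `enable_dpu` have changed between releases, so check the version you have installed). The calibration reader here feeds random tensors just to keep the example self-contained; a real run should iterate over representative, preprocessed images:

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader
import vai_q_onnx
from ultralytics import YOLO

# Step 1: export to ONNX with a fixed input shape (dynamic=False).
YOLO("yolov8n.pt").export(format="onnx", imgsz=640, dynamic=False, opset=17)

# Placeholder calibration reader: real calibration should use
# representative images, not random tensors.
class RandomDataReader(CalibrationDataReader):
    def __init__(self, input_name, n_samples=16):
        self.input_name = input_name
        self.samples = iter(
            np.random.rand(n_samples, 1, 3, 640, 640).astype(np.float32)
        )

    def get_next(self):
        batch = next(self.samples, None)
        return None if batch is None else {self.input_name: batch}

# Step 2: post-training static quantization with the Vitis AI quantizer.
vai_q_onnx.quantize_static(
    "yolov8n.onnx",
    "yolov8n_quantized.onnx",
    calibration_data_reader=RandomDataReader("images"),  # input name assumed
    quant_format=vai_q_onnx.QuantFormat.QDQ,
    calibrate_method=vai_q_onnx.PowerOfTwoMethod.MinMSE,
    activation_type=vai_q_onnx.QuantType.QUInt8,
    weight_type=vai_q_onnx.QuantType.QInt8,
    enable_dpu=True,
    extra_options={"ActivationSymmetric": True},
)
```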
Please refer to the Ryzen AI documentation for more details on the specifics of each step.
The examples provided here are AMD-optimized models, tuned mostly to get good accuracy from quantization. As you might be aware, depending on the nature of the model there can be accuracy loss when you run a quantized model, so a few additional options/methods were applied to these models to keep accuracy high after quantization.
Hope this answers your query.
@NithinG So here is my understanding:
We can run a model from anywhere in ONNX format directly with AMD Ryzen AI as the execution provider, and get a performance benefit compared to running on the CPU.
But if that model was specially prepared, by fixing the input shape and quantizing it with vai_q_onnx, we get additional benefits such as less accuracy loss and better performance?
Is that right?
I ask because I have a 7840U laptop and want to get a performance boost from the NPU inside it when running some YOLO models. But I want to save time by skipping the extra steps the Ryzen AI documentation mentions. Is that possible, or do I have to do those steps for it to work at all?
It would be great if running any model could get a performance boost from Ryzen AI without any extra work, so that all you need is to specify the execution provider as 'VitisAIExecutionProvider' in your code.
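
Something like this, based on the standard ONNX Runtime API (the model file name is a placeholder, and per the Ryzen AI docs the provider also expects the vaip_config.json shipped with the Ryzen AI software):

```python
import numpy as np
import onnxruntime as ort

# Create a session on the Ryzen AI NPU via the Vitis AI execution provider.
# "vaip_config.json" ships with the Ryzen AI software; the path is an assumption.
session = ort.InferenceSession(
    "yolov8n_quantized.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "vaip_config.json"}],
)

# Run one inference with a dummy tensor matching the fixed 1x3x640x640 input.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```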