Could you please explain how the original Whisper Tiny model is exported to Sentis format?

deleted

Could you please explain how the original Whisper Tiny model is exported to Sentis format?
Thank you.

deleted

I know that I can use Optimum to export the encoder and decoder, and I've confirmed that they can be replaced by those ONNX files.
I'd like to know how the LogMelSpectro model is exported.
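
For reference, a minimal sketch of such an Optimum export (the `openai/whisper-tiny` checkpoint is assumed here, and the exact output file names depend on the Optimum version):

```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# Export whisper-tiny's encoder and decoder to ONNX via Optimum.
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True)
model.save_pretrained("whisper-tiny-onnx")
# The output directory should contain encoder_model.onnx and decoder_model.onnx
# (and, depending on the Optimum version, decoder_with_past_model.onnx).
```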

Hi, we're now including the ONNX files, so you can drag those into the Unity Assets folder and click the "Serialize to Streaming Assets" button in the Inspector.

The LogMelSpectro was a custom module written in torch and exported to ONNX format. I'm not sure if we can share the original torch code... I'll ask!
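
For readers who land here: below is a minimal sketch of what such a module might look like (this is not the original code). It assumes Whisper's published front-end parameters (n_fft=400, hop length 160, 80 mel bins at 16 kHz) and expresses the STFT as a conv1d with a fixed DFT basis so it exports to ONNX without relying on the STFT operator; details such as frame trimming may differ from the original.

```python
import torch
import torch.nn.functional as F
from torchaudio.functional import melscale_fbanks


class LogMelSpectro(torch.nn.Module):
    # Hypothetical reconstruction of a Whisper-style log-mel front end.
    def __init__(self, n_fft=400, hop=160, n_mels=80, sr=16000):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        window = torch.hann_window(n_fft)
        # Real and imaginary DFT basis packed as conv kernels: (2 * n_freqs, 1, n_fft).
        k = torch.arange(n_fft // 2 + 1).unsqueeze(1).float()
        n = torch.arange(n_fft).unsqueeze(0).float()
        angle = 2 * torch.pi * k * n / n_fft
        basis = torch.cat([torch.cos(angle), -torch.sin(angle)]) * window
        self.register_buffer("basis", basis.unsqueeze(1))
        # Slaney-style mel filterbank, matching the librosa defaults Whisper uses.
        fb = melscale_fbanks(n_fft // 2 + 1, 0.0, sr / 2, n_mels, sr,
                             norm="slaney", mel_scale="slaney")
        self.register_buffer("mel_fb", fb.T)  # (n_mels, n_freqs)

    def forward(self, audio):  # audio: (batch, samples), 16 kHz mono
        x = F.pad(audio.unsqueeze(1), (self.n_fft // 2, self.n_fft // 2), mode="reflect")
        spec = F.conv1d(x, self.basis, stride=self.hop)          # (batch, 2*n_freqs, frames)
        n_freqs = self.n_fft // 2 + 1
        power = spec[:, :n_freqs] ** 2 + spec[:, n_freqs:] ** 2  # squared magnitude
        mel = self.mel_fb @ power                                # (batch, n_mels, frames)
        log_mel = torch.clamp(mel, min=1e-10).log10()
        # Whisper-style dynamic-range compression and scaling.
        log_mel = torch.maximum(log_mel, log_mel.max() - 8.0)
        return (log_mel + 4.0) / 4.0
```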

@SetoKaiba What purpose is this for? Training?

deleted

Thank you.

I'm trying to make a MindSpore Lite backend for Sentis, but it needs the model to be converted first, and the converter doesn't recognize the Sentis format, only ONNX.

https://discussions.unity.com/t/could-you-please-explain-how-is-the-original-whisper-tiny-model-exported-to-sentis-format/349556
I found out how to export LogMelSpectro. In case anyone else needs it as well, I'm posting the link here.

You can export torch models like this: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
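
Following that tutorial, exporting a module like the sketch above comes down to a single `torch.onnx.export` call; a minimal example (the input shape and tensor names here are illustrative):

```python
import torch

model = LogMelSpectro().eval()   # the sketch module from earlier in the thread
dummy = torch.randn(1, 480000)   # 30 s of 16 kHz audio, Whisper's chunk size
torch.onnx.export(
    model, dummy, "LogMelSpectro.onnx",
    input_names=["audio"], output_names=["log_mel"],
    opset_version=15,
)
```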

Do you mean you're trying to run it with MindSpore Lite instead of Sentis? Can you say any more about it? It sounds very interesting. Are you trying to run it in Unity?

deleted
β€’
edited Apr 9

I exported the Whisper encoder and decoder successfully with Optimum, and I exported LogMelSpectro with alexandreribard_unity's code, with a few modifications to make it compatible with the latest whisper and torch.

Yes, I'm trying to run it in Unity (though maybe it should be called Tuanjie instead).
There's a platform called OpenHarmony, targeted by a Unity Editor variant called the Tuanjie Editor from Yousandi (formerly Unity China).
OpenHarmony comes with an AI inference feature based on MindSpore Lite. With it, a model can run on the NPU or other accelerators besides the CPU and GPU, but the model needs to be converted to MindSpore format before inference.

Maybe it shouldn't be called a MindSpore Lite backend, since the backend doesn't come with any ops of its own.
It just converts the ONNX model to MindSpore Lite format and uses MindSpore Lite for inference.
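
For reference, that conversion step is typically done with the converter tool that ships with MindSpore Lite; per its documentation the invocation looks roughly like this (the file names are placeholders):

```
./converter_lite --fmk=ONNX --modelFile=encoder_model.onnx --outputFile=encoder_model
```

which writes `encoder_model.ms` for MindSpore Lite to load.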

And here are the reasons why I'm trying to achieve this:

  1. The OpenHarmony platform doesn't come with the Burst compiler.
  2. The GPU backend may run into compatibility issues, since OpenHarmony's GPU driver is a form of emulation by the open-source Mesa stack.
  3. I want to compare the performance of the NPU (with OpenHarmony's MindSpore Lite) against the GPU (with Sentis).

Thank you for your help.
