small, tiny, base models
#7
by
eschmidbauer
- opened
Thank you for sharing this code!
Any plans to release small, tiny, or base models?
I'm testing medium in-browser using ONNX + transformers.js, and it's still too slow.
I'm wondering whether any of the smaller models will be made available for further in-browser inference testing.
I found whisper.cpp to be very performant: https://huggingface.co/distil-whisper/distil-medium.en#whispercpp
I'm not sure whether you can run it in-browser, but the CPU-only performance is great.
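For the in-browser path mentioned above, a minimal sketch of how a transformers.js setup might select a checkpoint once smaller variants ship. The `distil-small.en` id here is an assumption (only `distil-medium.en` and `distil-large-v2` are released at the time of writing), and `checkpointFor` is a hypothetical helper, not part of any library:

```javascript
// Map a requested size to a distil-whisper checkpoint id.
// NOTE: 'distil-small.en' is an assumed future release; only
// distil-medium.en and distil-large-v2 exist as of this thread.
function checkpointFor(size) {
  const ids = {
    medium: 'distil-whisper/distil-medium.en',
    small: 'distil-whisper/distil-small.en', // assumption
  };
  if (!(size in ids)) throw new Error(`no checkpoint for size "${size}"`);
  return ids[size];
}

// Usage with transformers.js in the browser (requires the
// @xenova/transformers package and an ONNX export of the model):
//
//   import { pipeline } from '@xenova/transformers';
//   const transcriber = await pipeline(
//     'automatic-speech-recognition',
//     checkpointFor('medium'),
//   );
//   const { text } = await transcriber(audioUrl);
```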
Training 2-decoder-layer versions of small.en now!
thanks!
eschmidbauer changed discussion status to closed