I believe the model works best when processed sentence by sentence

#2
by Aid3445 - opened

This truly is a revolutionary model, and I'm really impressed with it. I think it could benefit from auto-chunking longer texts, because otherwise the output breaks down on long inputs and words stop being pronounced accurately.

OpenBMB org

Thanks for the suggestion. We actually tried that approach before, but we found the results weren't great. The resulting audio didn't sound very consistent, so we decided to remove that feature for this version.

So what's the best way to generate long or effectively unlimited sentences/paragraphs if the model doesn't auto-chunk?
Also, what about streaming?
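
For context, here's roughly the kind of manual chunking I had in mind: naively splitting the input on sentence boundaries, synthesizing each sentence, and stitching the audio back together. The `tts.generate(...)` call and the sample rate below are just placeholders, not the model's actual API — is something like this the recommended approach?

```python
# Rough sketch of manual sentence-by-sentence synthesis.
# `tts.generate(text)` is a stand-in for whatever the real
# synthesis call is; SAMPLE_RATE is also an assumption.
import re
import numpy as np

SAMPLE_RATE = 16000          # assumed; use the model's actual rate
PAUSE_SEC = 0.25             # short silence inserted between sentences

def split_sentences(text: str) -> list[str]:
    # Naive split on sentence-ending punctuation followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def synthesize_long(tts, text: str) -> np.ndarray:
    pause = np.zeros(int(SAMPLE_RATE * PAUSE_SEC), dtype=np.float32)
    chunks = []
    for sentence in split_sentences(text):
        wav = tts.generate(sentence)          # placeholder call
        chunks.append(np.asarray(wav, dtype=np.float32))
        chunks.append(pause)
    return np.concatenate(chunks) if chunks else np.zeros(0, dtype=np.float32)
```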
