#1
by MateoTeo - opened

Congrats!
I didn't test the original EVA, but played a lot with the default Qwen2.5-32B-Instruct while working on the next character card, and I was surprised at how detailed it can be with 4k+ instructions... and how meh it is for RP and creative writing despite good instructions and creative examples from 70b models.

This one, in Q4_K_M, looks good so far. I can feel that it flows more naturally with RP. Still smart and fairly detailed, but also more verbose and "open"... need to test some more.

Thanks! Personally, I was actually kinda shocked at how good it turned out. I have a bunch of RP scenes I use when testing models, to see whether a model gets things wrong, and this one passed with flying colors for ~30B and under. Usually there's a nitpick or two, but the only thing I noticed was the typical "shivers down spine" once or twice. But that's kinda par for the course, it seems, lol.

The original EVA 32B model, while good, had some logical problems when following instructions for the scene/characters. Meanwhile, Instruct matches your experience: good instruction-following, meh writing. Luckily, the merge didn't seem to water down either of their strengths too much, at least not noticeably on my end. They did just release a new version of EVA for 32B, and I'm going to see whether that does better by itself or in a merge next. Maybe even 14B later, for fun.

Hmm... it starts to break near 16k tokens: empty replies, bad logic, and jumps into Chinese (with FP16 KV). An EVA limit?
As for the rest, it's smooth sailing so far. A 15360-token limit seems to work fine.
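For anyone wanting to reproduce a cap like that, the context window is set when the backend is launched. A minimal sketch, assuming KoboldCpp and a placeholder model path:

```shell
# Launch KoboldCpp with an explicit context window just under the break point.
# "model.gguf" is a placeholder path; --contextsize caps how many tokens
# (prompt + generation) the backend will handle before truncating.
python koboldcpp.py --model model.gguf --contextsize 15360
```

Frontends like SillyTavern should then be configured to the same (or a slightly smaller) context size so they trim history before the backend hits the cap.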

Good luck with your experiments, mate!

UPD. Never mind, that was a processing bug in Kobold.

EVA supposedly did well at 60k, according to other people's reports.

[Screenshot (2024-10-27): r/LocalLLaMA post "New Qwen 32B Full Finetune for RP/Storytelling (EVA)"]

But if it was just Kobold being weird for a bit, then that's good to know.

MateoTeo changed discussion status to closed
MateoTeo changed discussion status to open