The `gemma-2` models are just weird...
#5
by jukofyork - opened
I see you ran using BF16 - did you also try setting attention to "eager"?
I wonder if they are still broken?
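For reference, something along these lines is what I mean - just a sketch, where the model ID, prompt, and generation settings are placeholder assumptions, not what you actually ran:

```python
# Sketch: load a gemma-2 model in BF16 with "eager" attention instead of
# the default SDPA path. The model ID below is assumed for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # placeholder model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,    # BF16, as in your run
    attn_implementation="eager",   # force eager attention
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```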
I ran them without changing any settings in the latest llama.cpp. They did indeed both feel a bit broken - outputting multiple newlines, that sort of broken. They behaved as if they were trying to output invisible tokens from time to time but couldn't. Likely related to this: https://github.com/ggerganov/llama.cpp/issues/8240#issuecomment-2212444937
Yeah, there still seems to be something strange about it. I'm not even 100% sure the Transformers implementation is correct.
ChuckMcSneed changed discussion status to closed