MoonRide

AI & ML interests

None yet

MoonRide's activity

New activity in speakleash/Bielik-11B-v2.3-Instruct-GGUF-IQ-Imatrix about 2 months ago

imatrix file missing

#1 opened about 2 months ago by MoonRide
Reacted to BlinkDL's post with 👀 2 months ago
RWKV-7 "Goose" preview rc2 => Peak RNN architecture? 😃 Will try to squeeze more performance for the final release. Preview code & model: https://github.com/BlinkDL/RWKV-LM/tree/main/RWKV-v7
Reacted to bartowski's post with 👍 3 months ago
Decided to check how many weights in a 70B F32 model would be squashed when converted to F16 (spoiler: it's shockingly few)

The reason for this comparison is that it should represent the same percentage of squishing as bf16 to fp16: bf16 shares fp32's 8-bit exponent while fp16 only has 5, so the values that fall outside fp16's representable range are the same whether you start from fp32 or bf16.
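
A quick way to see that range gap (a minimal illustration, not from the original post):

```python
# fp16 has a 5-bit exponent; bf16 has fp32's 8-bit exponent, so bf16 covers
# a far wider dynamic range and fp16 clips anything outside ~6.1e-05..65504.
import torch

print(torch.finfo(torch.float16))   # max=65504, smallest normal ~6.10e-05
print(torch.finfo(torch.bfloat16))  # max ~3.39e+38, same exponent range as fp32
```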

Had Claude make me a script, using the new Reflection-70B, and these are the results:

Total weights: 70553706496
Fully representable: 70530215524
Squashed: 23490972
Percentage squashed: 0.03%

0.03%!!!!

A couple things to note: this uses a roundtrip of F32 -> F16 -> F32 and then torch.isclose to account for the rounding error that's inherent when comparing such precise numbers, but it uses VERY small tolerances (rtol=1e-5, atol=1e-8)

This also examines EVERY weight that was stored at F32, and for most layers I saw somewhere between 0% and 0.03% of weights being squashed, with no major outliers.

Overall, I feel even safer converting to F16 for llama.cpp; the extremely small number of weights that fall outside the range are likely so small that they don't actually play a role in the final output of the model at inference anyway.
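
For reference, a minimal sketch of that kind of check (the actual script isn't shown in the post; the function name and shard filename below are illustrative, assuming the weights are stored as safetensors shards):

```python
# Hypothetical reconstruction of the check described above: round-trip every
# F32 tensor through F16 and count elements torch.isclose no longer accepts.
import torch
from safetensors.torch import load_file

def count_squashed(path, rtol=1e-5, atol=1e-8):
    total, squashed = 0, 0
    for name, w in load_file(path).items():  # one .safetensors shard
        if w.dtype != torch.float32:
            continue  # only weights stored at F32 are examined
        roundtrip = w.to(torch.float16).to(torch.float32)
        mismatch = ~torch.isclose(roundtrip, w, rtol=rtol, atol=atol)
        total += w.numel()
        squashed += mismatch.sum().item()
    return total, squashed

total, squashed = count_squashed("model-00001-of-00030.safetensors")  # example shard
print(f"Total weights: {total}")
print(f"Squashed: {squashed} ({100 * squashed / total:.4f}%)")
```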