Update README.md
#13
opened by artemtumch
Added half precision.
Hi @artemtumch, thanks for your contribution. However, this model was trained in FP32. So, although inference can be run in FP16 with very little difference in performance, I think it's important to mention that in the README rather than quietly setting FP16 as the default.
Hi @ZhengPeng7, you're right.
Still, many thanks. A few days ago, I ran comprehensive experiments on FP16 inference. Even the previously trained FP32 weights can be loaded and work perfectly (~0 difference) in FP16 mode. Therefore, I have set FP16 as the default setting for all occurrences.
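For context, a minimal sketch of what loading FP32-trained weights and running inference in FP16 typically looks like in PyTorch; the `BiRefNet` import path, checkpoint filename, and input size here are assumptions for illustration, not necessarily the repository's exact API.

```python
import torch

# Hypothetical import path; the actual module name depends on the repo layout.
from models.birefnet import BiRefNet

# Load the FP32-trained checkpoint, then cast the whole model to FP16.
model = BiRefNet()
state_dict = torch.load('birefnet_fp32.pth', map_location='cpu')
model.load_state_dict(state_dict)
model = model.half().cuda().eval()

# Inputs must also be FP16 so dtypes match inside the forward pass.
image = torch.rand(1, 3, 1024, 1024, dtype=torch.float16, device='cuda')
with torch.no_grad():
    pred = model(image)
```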
This PR has conflicts and cannot be merged. But still many thanks :)
ZhengPeng7 changed pull request status to closed