No GGUF?
...
also waiting!
When trying to convert with llama.cpp I get:
(sati) rob@Robins-MBP-2 llama.cpp % python3 ./convert.py ../phi-2
Loading model file ../phi-2/model-00001-of-00002.safetensors
Loading model file ../phi-2/model-00001-of-00002.safetensors
Loading model file ../phi-2/model-00002-of-00002.safetensors
Traceback (most recent call last):
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1228, in <module>
    main()
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1161, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1078, in load_some_model
    model_plus = merge_multifile_models(models_plus)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 593, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 572, in merge_sharded
    return {name: convert(name) for name in names}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 572, in <dictcomp>
    return {name: convert(name) for name in names}
                  ^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 547, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 547, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                      ~~~~~^^^^^^
KeyError: 'transformer.embd.wte.weight'
(sati) rob@Robins-MBP-2 llama.cpp %
I've been following this: https://github.com/mrgraycode/llama.cpp/commit/12cc80cb8975aea3bc9f39d3c9b84f7001ab94c5#diff-150dc86746a90bad4fc2c3334aeb9b5887b3adad3cc1459446717638605348efR6239
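If I'm reading the error right, the legacy convert.py only knows LLaMA-style tensor layouts, so it has no mapping for phi-2's transformer.embd.wte.weight and bails with that KeyError. I believe the phi-2 conversion path lives in convert-hf-to-gguf.py instead, so something along these lines should work (the output filename is just a placeholder, and this assumes a checkout recent enough to include the phi-2 changes):

# from the llama.cpp root, on a checkout with phi-2 support
python3 convert-hf-to-gguf.py ../phi-2 --outtype f16 --outfile phi-2.f16.gguf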
Here you go: https://huggingface.co/kroonen/phi-2-GGUF/tree/main
Hi kroonen,
Could you create a Q4 version? The Q8 version might be too slow for my CPU. :P
Thanks!
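In the meantime, if anyone wants to roll their own 4-bit file and that repo has an f16 GGUF, the quantize tool that ships with llama.cpp should do it. A rough sketch (filenames are placeholders; Q4_K_M is the usual 4-bit choice):

# from the llama.cpp root, after building the tools (make)
./quantize phi-2.f16.gguf phi-2.Q4_K_M.gguf Q4_K_M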