# Stable-Platypus2-13B-GGML / requirements.txt
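# Pull NVIDIA's CUDA runtime and cuBLAS packages from the NGC index,
# presumably to enable GPU-accelerated (cuBLAS) inference.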
--extra-index-url https://pypi.ngc.nvidia.com
nvidia-cuda-runtime
nvidia-cublas
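# llama-cpp-python pinned to the prebuilt v0.1.77 wheel for CPython 3.10 on manylinux x86_64.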
llama-cpp-python @ https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.77/llama_cpp_python-0.1.77-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
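# Remaining dependencies; pyyaml and torch are presumably used by the surrounding app code
# rather than by the GGML runtime itself.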
pyyaml
torch
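# Minimal usage sketch (kept as comments so pip can still parse this file),
# assuming a hypothetical local GGML model file at ./model.ggmlv3.q4_0.bin:
#
#   from llama_cpp import Llama
#
#   # Load the GGML model, offloading some layers to the GPU via cuBLAS.
#   llm = Llama(model_path="./model.ggmlv3.q4_0.bin", n_gpu_layers=40)
#
#   # Run a single completion and print the generated text.
#   out = llm("Q: What is a llama? A:", max_tokens=64)
#   print(out["choices"][0]["text"])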