Expected minimum hardware requirements for inference?
Title is self-explanatory.
It's not easy to give you a reliable answer, since the "minimum" requirement could also be CPU inference... if you are willing to wait minutes for a few tokens.
That said, we are hosting our demo on an NVIDIA A10G, and it looks pretty fast!
Further quantization with LLM.int8() would help you load the model even on smaller GPUs.
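For illustration, a minimal sketch of 8-bit loading via bitsandbytes/`load_in_8bit` (the model id and the `trust_remote_code` flag are assumptions here; adjust them to the checkpoint you are actually using):

```python
# Hedged sketch: LLM.int8() quantization through transformers + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "replit/replit-code-v1-3b"  # assumed model id for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,       # LLM.int8() quantization via bitsandbytes
    device_map="auto",       # let accelerate place layers on the available GPU(s)
    trust_remote_code=True,  # the checkpoint ships custom modeling code
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0]))
```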
Hello, is the demo hosted naively with transformers, as described in the model card? I thought it was hosted with an optimized method such as FasterTransformer.
It's hosted natively on the GPU, as described in the model card, with bfloat16 precision but without any flash attention; i.e., the `attn_impl` kwarg defaults to `'torch'`.
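For reference, a minimal sketch of that kind of setup, assuming the MPT-style `attn_config` dict exposed by the checkpoint's custom modeling code (check the model card for the exact knob names):

```python
# Hedged sketch: load the checkpoint in bfloat16 with the default 'torch' attention path.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "replit/replit-code-v1-3b"  # assumed model id for illustration

config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.attn_config["attn_impl"] = "torch"  # the default; 'flash'/'triton' need extra dependencies

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,  # bf16 precision, as used for the hosted demo
    trust_remote_code=True,
).to("cuda")
```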
When I set `attn_impl` to `flash`, `flash_attn` cannot be used with alibi, so the `wpe` layer must be newly initialized.
My questions are:
- Will you release a `flash_attn` version of the model, i.e. one whose `wpe` layer matches the `flash_attn` config?
- How much faster is `flash_attn` than `torch`? Will it be faster during inference? (A rough timing sketch follows below.)
- Will you release a FasterTransformer version of the code? It would benefit many people.
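For the speed question, here is a rough sketch of how one could measure tokens per second for each configuration and compare them; it assumes `model` and `tokenizer` are already loaded (e.g. as in the snippets above):

```python
# Hedged sketch: compare inference speed between attention implementations
# by measuring generated tokens per second.
import time
import torch

def tokens_per_second(model, tokenizer, prompt: str, max_new_tokens: int = 128) -> float:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    generated = out.shape[-1] - inputs["input_ids"].shape[-1]
    return generated / elapsed

# Run once per configuration (e.g. attn_impl='torch' vs. a flash build) and compare.
```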
ggml just merged a CPU inference implementation:
https://github.com/ggerganov/ggml/tree/master/examples/replit
It's pretty fast on my M1 MacBook Air with 16 GB of RAM.