Runtime error

Exit code: 1. Reason:

attn-2.6.3-py3-none-any.whl size=144265 sha256=45a276aa6b420f6474f75e9c4f74347bff188a92b9119b906baaae7feccabe18
Stored in directory: /home/user/.cache/pip/wheels/e4/a5/97/ac7090d37d0937de252e0f8e611e3e15e56dfb660eff5e8a76
Successfully built flash-attn
Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.0 flash-attn-2.6.3

[notice] A new release of pip available: 22.3.1 -> 24.2
[notice] To update, run: /usr/local/bin/python3.10 -m pip install --upgrade pip

Downloading shards:   0%|          | 0/4 [00:00<?, ?it/s]
Downloading shards:  25%|██▌       | 1/4 [00:14<00:44, 14.88s/it]
Downloading shards:  50%|█████     | 2/4 [00:29<00:29, 14.73s/it]
Downloading shards:  75%|███████▌  | 3/4 [00:44<00:14, 14.77s/it]
Downloading shards: 100%|██████████| 4/4 [00:49<00:00, 10.79s/it]
Downloading shards: 100%|██████████| 4/4 [00:49<00:00, 12.25s/it]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 15, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, token=hf_token, torch_dtype=torch.float16, device_map="auto", attn_implementation="flash_attention_2")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3880, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1572, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1708, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
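The traceback points at the from_pretrained call in app.py (line 15): it requests attn_implementation="flash_attention_2", but the container exposes no CUDA device, so transformers refuses to enable FlashAttention-2. Below is a minimal sketch of a guarded fallback; the torch.cuda.is_available() check, the "sdpa" fallback (supported in transformers >= 4.36), and the environment-variable placeholders for model_name and hf_token are assumptions, not part of the original app.

```python
import os

import torch
from transformers import AutoModelForCausalLM

# Hypothetical placeholders: the real model_name and hf_token values
# are not shown in the log above.
model_name = os.environ.get("MODEL_NAME")
hf_token = os.environ.get("HF_TOKEN")

if torch.cuda.is_available():
    # FlashAttention-2 kernels only run on CUDA devices.
    attn_impl = "flash_attention_2"
    dtype = torch.float16
else:
    # CPU-only fallback: PyTorch's scaled-dot-product attention,
    # and float32, since fp16 matmuls are slow or unsupported on CPU.
    attn_impl = "sdpa"
    dtype = torch.float32

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=dtype,
    device_map="auto",
    attn_implementation=attn_impl,
)
```

Alternatively, assigning GPU hardware to the Space keeps the original flash_attention_2 path working unchanged.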
