Runtime error
Exit code: 1. Reason: model-00002-of-00002.safetensors finished downloading (3.96G, 100% in about 12s), after which model loading failed:

Traceback (most recent call last):
  File "/app/demo/gradio_demo.py", line 372, in <module>
    tokenizer, model, image_processors = load_pretrained_model(
  File "/app/vlm_fo1/model/builder.py", line 40, in load_pretrained_model
    model, loading_info = OmChatQwen25VLForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 272, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4395, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2112, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2262, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
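The download succeeded, so the failure is purely environmental: the model is being loaded with FlashAttention2 enabled, which requires a CUDA device, but the container runs on CPU-only hardware. Below is a minimal sketch of a CPU-safe fallback; attn_implementation, torch_dtype, and output_loading_info are standard transformers from_pretrained arguments, while model_path and the import path for OmChatQwen25VLForCausalLM are assumptions (the internals of vlm_fo1/model/builder.py are not shown in this log):

    import torch
    # Import path is an assumption; only /app/vlm_fo1/model/builder.py
    # appears in the traceback, not the module that defines the class.
    from vlm_fo1.model import OmChatQwen25VLForCausalLM

    model_path = "path/to/checkpoint"  # placeholder for the real checkpoint path/ID

    # Request FlashAttention2 only when a CUDA device actually exists;
    # otherwise fall back to PyTorch's scaled-dot-product attention ("sdpa").
    attn_impl = "flash_attention_2" if torch.cuda.is_available() else "sdpa"

    model, loading_info = OmChatQwen25VLForCausalLM.from_pretrained(
        model_path,
        attn_implementation=attn_impl,  # avoids _check_and_enable_flash_attn_2 raising on CPU
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
        output_loading_info=True,       # returns the (model, loading_info) tuple builder.py unpacks
    )

Alternatively, leaving the code untouched and running the container on GPU hardware (so that torch can see a CUDA device, with flash-attn installed) satisfies the original configuration without code changes.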