runtime error
Exit code: 1. Reason:
|████| 2/2 [00:31<00:00, 15.80s/it]
`config.hidden_act` is ignored, you should use `config.hidden_activation` instead.
Gemma's activation function will be set to `gelu_pytorch_tanh`. Please, use `config.hidden_activation` if you want to override this behaviour. See https://github.com/huggingface/transformers/pull/29402 for more details.
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 25, in <module>
    RAG = RAGMultiModalModel.from_pretrained("vidore/colpali-v1.2", verbose=1)
  File "/usr/local/lib/python3.10/site-packages/byaldi/RAGModel.py", line 59, in from_pretrained
    instance.model = ColPaliModel.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/byaldi/colpali.py", line 209, in from_pretrained
    return cls(
  File "/usr/local/lib/python3.10/site-packages/byaldi/colpali.py", line 69, in __init__
    self.model = ColPali.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4225, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4728, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 993, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 329, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
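The failure happens while the checkpoint shards are being moved to a CUDA device on a host with no NVIDIA driver (i.e. CPU-only hardware). A minimal sketch of a CPU fallback is below; it assumes byaldi's `RAGMultiModalModel.from_pretrained` accepts a `device` keyword (that keyword name is an assumption, not shown in the traceback), and the `torch.cuda.is_available()` check is standard PyTorch.

```python
# Sketch: fall back to CPU when no NVIDIA driver/GPU is present.
# The `device=` keyword on byaldi's from_pretrained is assumed here,
# not confirmed by the traceback above.
import torch
from byaldi import RAGMultiModalModel

# Only request CUDA if the driver and a GPU are actually available.
device = "cuda" if torch.cuda.is_available() else "cpu"

RAG = RAGMultiModalModel.from_pretrained(
    "vidore/colpali-v1.2",
    device=device,  # assumed keyword; avoids torch._C._cuda_init() on CPU-only hosts
    verbose=1,
)
```

The alternative is to upgrade the Space (or host) to GPU hardware so the default CUDA placement succeeds.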