2024-07-11 22:35:40 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40007, worker_address='http://10.140.66.196:40007', controller_address='http://10.140.60.209:10075', model_path='share_internvl/InternVL2-78B/', model_name=None, device='auto', limit_model_concurrency=5, stream_interval=1, load_8bit=False)
2024-07-11 22:35:40 | INFO | model_worker | Loading the model InternVL2-78B on worker 4ae09d ...
2024-07-11 22:35:40 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:35:40 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:35:44 | ERROR | stderr | Loading checkpoint shards:   0%|          | 0/33 [00:00<?, ?it/s]
2024-07-11 22:37:03 | WARNING | transformers.generation.utils | Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
2024-07-11 22:37:03 | WARNING | transformers.generation.utils | Both `max_new_tokens` (=2048) and `max_length`(=8192) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
2024-07-11 22:37:04 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2024-07-11 22:37:08 | ERROR | stderr | Exception in thread Thread-3 (chat):
2024-07-11 22:37:08 | ERROR | stderr | Traceback (most recent call last):
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
2024-07-11 22:37:08 | ERROR | stderr |     self.run()
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 946, in run
2024-07-11 22:37:08 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-78B/modeling_internvl_chat.py", line 283, in chat
2024-07-11 22:37:08 | ERROR | stderr |     generation_output = self.generate(
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:37:08 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-78B/modeling_internvl_chat.py", line 333, in generate
2024-07-11 22:37:08 | ERROR | stderr |     outputs = self.language_model.generate(
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:37:08 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/utils.py", line 1525, in generate
2024-07-11 22:37:08 | ERROR | stderr |     return self.sample(
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/utils.py", line 2641, in sample
2024-07-11 22:37:08 | ERROR | stderr |     next_token_scores = logits_processor(input_ids, next_token_logits)
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 97, in __call__
2024-07-11 22:37:08 | ERROR | stderr |     scores = processor(input_ids, scores)
2024-07-11 22:37:08 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 333, in __call__
2024-07-11 22:37:08 | ERROR | stderr |     score = torch.gather(scores, 1, input_ids)
2024-07-11 22:37:08 | ERROR | stderr | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:4 and cuda:0! (when checking argument for argument index in method wrapper_CUDA_gather)
2024-07-11 22:37:12 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:19 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:34 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:49 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:04 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:19 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:34 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:49 | INFO | model_worker | Send heart beat. Models: ['InternVL2-78B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
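Analysis: the RuntimeError is raised inside the repetition-penalty logits processor (logits_process.py line 333, `score = torch.gather(scores, 1, input_ids)`). With device='auto', accelerate shards the 78B model across the GPUs, and the logits apparently come back on the device holding the lm_head shard (cuda:4 here) while `input_ids` stay on cuda:0, so the gather mixes devices. One workaround, sketched below as a minimal and untested suggestion, is to wrap the stock processor so it moves `input_ids` onto the scores device first; the `penalty=1.1` value in the usage line is only a placeholder, and the built-in `repetition_penalty` generation argument should be left unset so transformers does not also instantiate its own copy of the processor.

import torch
from transformers.generation.logits_process import (
    LogitsProcessorList,
    RepetitionPenaltyLogitsProcessor,
)

class DeviceSafeRepetitionPenalty(RepetitionPenaltyLogitsProcessor):
    # Move input_ids onto the device of the scores tensor before the
    # parent class runs torch.gather, so a model sharded with
    # device_map='auto' cannot mix cuda:0 and cuda:4 tensors here.
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        return super().__call__(input_ids.to(scores.device), scores)

# Hypothetical usage inside the worker's generate call:
# outputs = model.language_model.generate(
#     ...,
#     logits_processor=LogitsProcessorList([DeviceSafeRepetitionPenalty(penalty=1.1)]),
# )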
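An alternative is to avoid device_map='auto' and pass an explicit device map that pins everything the sampling loop touches directly (vision encoder, projector, input embeddings, final norm, lm_head) to GPU 0, spreading only the transformer layers across the cards. The sketch below follows the shape of the split_model() helper in the InternVL documentation, but the module names (vision_model, mlp1, language_model.model.layers.N, ...) and the 80-layer / 8-GPU numbers are assumptions for this 78B checkpoint and should be checked against model.named_modules() before use.

import math
import torch
from transformers import AutoModel

def split_model(num_layers=80, num_gpus=8):
    # Assumed layout: spread LLM layers round-robin in contiguous blocks,
    # keep all generation-critical modules on GPU 0 so no cross-device
    # gather can occur in the logits processors.
    device_map = {}
    per_gpu = math.ceil(num_layers / num_gpus)
    layer = 0
    for gpu in range(num_gpus):
        for _ in range(per_gpu):
            if layer == num_layers:
                break
            device_map[f'language_model.model.layers.{layer}'] = gpu
            layer += 1
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.lm_head'] = 0
    return device_map

model = AutoModel.from_pretrained(
    'share_internvl/InternVL2-78B/',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map=split_model(),
).eval()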