2024-07-11 22:15:24 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40006, worker_address='http://10.140.60.25:40006', controller_address='http://10.140.60.209:10075', model_path='share_internvl/InternVL2-40B/', model_name=None, device='auto', limit_model_concurrency=5, stream_interval=1, load_8bit=False)
2024-07-11 22:15:24 | INFO | model_worker | Loading the model InternVL2-40B on worker 1380b0 ...
2024-07-11 22:15:24 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:15:24 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:15:30 | ERROR | stderr | /mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:397: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2024-07-11 22:15:30 | ERROR | stderr | warnings.warn(
2024-07-11 22:15:33 | ERROR | stderr | Loading checkpoint shards: 0%| | 0/17 [00:00<?, ?it/s]
...: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-11 22:18:13 | INFO | model_worker | Register to controller
2024-07-11 22:18:17 | INFO | stdout | INFO: 10.140.60.209:42746 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:18:21 | INFO | stdout | INFO: 10.140.60.209:42872 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:18:22 | INFO | stdout | INFO: 10.140.60.209:42892 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:18:22 | INFO | stdout | INFO: 10.140.60.209:42912 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:18:28 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:18:43 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:18:58 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:19:13 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:19:28 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:19:32 | INFO | stdout | INFO: 10.140.60.209:43356 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:19:32 | INFO | stdout | INFO: 10.140.60.209:43372 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:19:43 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:19:58 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:19:59 | INFO | stdout | INFO: 10.140.60.209:43468 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:19:59 | INFO | stdout | INFO: 10.140.60.209:43487 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:20:00 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2024-07-11 22:20:00 | INFO | stdout | INFO: 10.140.60.209:43492 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2024-07-11 22:20:00 | INFO | model_worker | max_input_tile_list: [12]
2024-07-11 22:20:00 | INFO | model_worker | Split images to torch.Size([13, 3, 448, 448])
2024-07-11 22:20:00 | INFO | model_worker | []
2024-07-11 22:20:00 | INFO | model_worker | Generation config: {'num_beams': 1, 'max_new_tokens': 2048, 'do_sample': True, 'temperature': 0.8, 'repetition_penalty': 1.1, 'max_length': 8192, 'top_p': 0.7, 'streamer': <...>}
2024-07-11 22:20:02 | WARNING | transformers.generation.utils | Both `max_new_tokens` (=2048) and `max_length`(=8192) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
2024-07-11 22:20:05 | ERROR | stderr | Exception in thread Thread-3 (chat):
2024-07-11 22:20:05 | ERROR | stderr | Traceback (most recent call last):
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
2024-07-11 22:20:05 | ERROR | stderr |     self.run()
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/threading.py", line 946, in run
2024-07-11 22:20:05 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-40B/modeling_internvl_chat.py", line 280, in chat
2024-07-11 22:20:05 | ERROR | stderr |     generation_output = self.generate(
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:20:05 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-40B/modeling_internvl_chat.py", line 330, in generate
2024-07-11 22:20:05 | ERROR | stderr |     outputs = self.language_model.generate(
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:20:05 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/generation/utils.py", line 1525, in generate
2024-07-11 22:20:05 | ERROR | stderr |     return self.sample(
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/generation/utils.py", line 2641, in sample
2024-07-11 22:20:05 | ERROR | stderr |     next_token_scores = logits_processor(input_ids, next_token_logits)
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 97, in __call__
2024-07-11 22:20:05 | ERROR | stderr |     scores = processor(input_ids, scores)
2024-07-11 22:20:05 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 333, in __call__
2024-07-11 22:20:05 | ERROR | stderr |     score = torch.gather(scores, 1, input_ids)
2024-07-11 22:20:05 | ERROR | stderr | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper_CUDA_gather)
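Note on the RuntimeError above: the worker was started with device='auto', so the 40B checkpoint is sharded across several GPUs, and repetition_penalty=1.1 in the generation config activates transformers' RepetitionPenaltyLogitsProcessor, whose __call__ runs torch.gather(scores, 1, input_ids) (the logits_process.py:333 frame in the traceback). If input_ids sit on a different GPU than the logits returned by the sharded language model, the gather fails exactly as logged. The snippet below is only a minimal sketch of the mismatch and one possible workaround (moving the token ids onto the device of the logits); it assumes at least two visible GPUs and is not the worker's own code path.

import torch

# Reproduce the gather() device mismatch reported in the traceback
# (assumes >= 2 visible CUDA devices).
if torch.cuda.device_count() >= 2:
    scores = torch.randn(1, 32000, device="cuda:0")                # logits from the sharded LM
    input_ids = torch.randint(0, 32000, (1, 16), device="cuda:2")  # prompt ids left on another GPU

    try:
        torch.gather(scores, 1, input_ids)                         # raises: tensors on cuda:2 and cuda:0
    except RuntimeError as err:
        print(err)

    # Possible workaround (an assumption, not the repo's official fix):
    # align the ids with the logits device before the logits processors run.
    torch.gather(scores, 1, input_ids.to(scores.device))           # succeeds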
2024-07-11 22:20:10 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:20:13 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:20:28 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:20:43 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:20:58 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:21:13 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:21:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:21:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:21:59 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:22:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:22:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:22:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:22:59 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:23:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:23:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:23:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:23:59 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:24:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:24:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:24:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:24:59 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:25:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
global_counter: 1 2024-07-11 22:25:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1 2024-07-11 22:25:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1 2024-07-11 22:25:59 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1 2024-07-11 22:26:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1 2024-07-11 22:26:29 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1 2024-07-11 22:26:44 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
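After the crash the worker keeps heartbeating with an idle Semaphore, so the /worker_generate_stream request itself produced no output. If the device mismatch recurs with device='auto', one hedged alternative is to load the checkpoint with an explicit device_map that keeps the vision tower, the token embedding and the LM head on the same GPU, so the ids and the logits end up co-located. The sketch below is an assumption-laden illustration (the module names vision_model, mlp1, language_model.model.* and the 60-layer count are guesses about the checkpoint layout, not taken from this log), not the repo's own loading code.

import torch
from transformers import AutoModel, AutoTokenizer

path = "share_internvl/InternVL2-40B/"   # model_path from the worker args above

# Hypothetical explicit device_map: pin the vision tower, projector, embedding,
# final norm and lm_head to GPU 0, and spread the decoder layers over the rest.
num_gpus = max(torch.cuda.device_count(), 1)
num_layers = 60                          # assumed depth of the language backbone
device_map = {
    "vision_model": 0,
    "mlp1": 0,
    "language_model.model.embed_tokens": 0,
    "language_model.model.norm": 0,
    "language_model.lm_head": 0,
}
for i in range(num_layers):
    device_map[f"language_model.model.layers.{i}"] = min(i * num_gpus // num_layers, num_gpus - 1)

model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map=device_map,
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)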