2024-07-11 22:32:13 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40006, worker_address='http://10.140.60.25:40006', controller_address='http://10.140.60.209:10075', model_path='share_internvl/InternVL2-40B/', model_name=None, device='auto', limit_model_concurrency=5, stream_interval=1, load_8bit=False)
2024-07-11 22:32:13 | INFO | model_worker | Loading the model InternVL2-40B on worker 762b3d ...
2024-07-11 22:32:13 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:32:13 | WARNING | transformers.tokenization_utils_base | Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-07-11 22:32:15 | ERROR | stderr | /mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:397: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2024-07-11 22:32:15 | ERROR | stderr |   warnings.warn(
2024-07-11 22:32:16 | ERROR | stderr | 
Loading checkpoint shards:   0%|          | 0/17 [00:00<?, ?it/s]
2024-07-11 22:32:18 | ERROR | stderr | 
Loading checkpoint shards:   6%|▌         | 1/17 [00:01<00:31,  1.98s/it]
2024-07-11 22:32:20 | ERROR | stderr | 
Loading checkpoint shards:  12%|█▏        | 2/17 [00:03<00:29,  1.95s/it]
2024-07-11 22:32:22 | ERROR | stderr | 
Loading checkpoint shards:  18%|█▊        | 3/17 [00:05<00:27,  1.94s/it]
2024-07-11 22:32:24 | ERROR | stderr | 
Loading checkpoint shards:  24%|██▎       | 4/17 [00:07<00:26,  2.01s/it]
2024-07-11 22:32:26 | ERROR | stderr | 
Loading checkpoint shards:  29%|██▉       | 5/17 [00:10<00:25,  2.09s/it]
2024-07-11 22:32:28 | ERROR | stderr | 
Loading checkpoint shards:  35%|███▌      | 6/17 [00:12<00:22,  2.07s/it]
2024-07-11 22:32:30 | ERROR | stderr | 
Loading checkpoint shards:  41%|████      | 7/17 [00:14<00:20,  2.07s/it]
2024-07-11 22:32:32 | ERROR | stderr | 
Loading checkpoint shards:  47%|████▋     | 8/17 [00:16<00:18,  2.09s/it]
2024-07-11 22:32:34 | ERROR | stderr | 
Loading checkpoint shards:  53%|█████▎    | 9/17 [00:18<00:16,  2.06s/it]
2024-07-11 22:32:36 | ERROR | stderr | 
Loading checkpoint shards:  59%|█████▉    | 10/17 [00:20<00:14,  2.04s/it]
2024-07-11 22:32:38 | ERROR | stderr | 
Loading checkpoint shards:  65%|██████▍   | 11/17 [00:22<00:12,  2.05s/it]
2024-07-11 22:32:40 | ERROR | stderr | 
Loading checkpoint shards:  71%|███████   | 12/17 [00:24<00:10,  2.02s/it]
2024-07-11 22:32:43 | ERROR | stderr | 
Loading checkpoint shards:  76%|███████▋  | 13/17 [00:27<00:08,  2.22s/it]
2024-07-11 22:32:45 | ERROR | stderr | 
Loading checkpoint shards:  82%|████████▏ | 14/17 [00:29<00:06,  2.20s/it]
2024-07-11 22:32:48 | ERROR | stderr | 
Loading checkpoint shards:  88%|████████▊ | 15/17 [00:32<00:04,  2.39s/it]
2024-07-11 22:32:50 | ERROR | stderr | 
Loading checkpoint shards:  94%|█████████▍| 16/17 [00:34<00:02,  2.26s/it]
2024-07-11 22:32:51 | ERROR | stderr | 
Loading checkpoint shards: 100%|██████████| 17/17 [00:35<00:00,  1.93s/it]
2024-07-11 22:32:51 | ERROR | stderr | 
Loading checkpoint shards: 100%|██████████| 17/17 [00:35<00:00,  2.07s/it]
2024-07-11 22:32:51 | ERROR | stderr | 
2024-07-11 22:32:52 | INFO | model_worker | Register to controller
2024-07-11 22:32:52 | ERROR | stderr | INFO:     Started server process [86159]
2024-07-11 22:32:52 | ERROR | stderr | INFO:     Waiting for application startup.
2024-07-11 22:32:52 | ERROR | stderr | INFO:     Application startup complete.
2024-07-11 22:32:52 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:40006 (Press CTRL+C to quit)
2024-07-11 22:33:07 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: None. global_counter: 0
2024-07-11 22:33:11 | INFO | stdout | INFO:     10.140.60.209:48660 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:33:13 | INFO | stdout | INFO:     10.140.60.209:48714 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:33:13 | INFO | stdout | INFO:     10.140.60.209:48736 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:33:14 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2024-07-11 22:33:14 | INFO | stdout | INFO:     10.140.60.209:48756 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2024-07-11 22:33:14 | INFO | model_worker | max_input_tile_list: [12]
2024-07-11 22:33:14 | INFO | model_worker | Split images to torch.Size([13, 3, 448, 448])
2024-07-11 22:33:14 | INFO | model_worker | []
2024-07-11 22:33:14 | INFO | model_worker | Generation config: {'num_beams': 1, 'max_new_tokens': 2048, 'do_sample': True, 'temperature': 0.8, 'repetition_penalty': 1.1, 'max_length': 8192, 'top_p': 0.7, 'streamer': <transformers.generation.streamers.TextIteratorStreamer object at 0x7f3dd8422530>}
2024-07-11 22:33:15 | ERROR | stderr | /mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:397: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
2024-07-11 22:33:15 | ERROR | stderr |   warnings.warn(
2024-07-11 22:33:15 | WARNING | transformers.generation.utils | Both `max_new_tokens` (=2048) and `max_length`(=8192) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
2024-07-11 22:33:18 | ERROR | stderr | Exception in thread Thread-3 (chat):
2024-07-11 22:33:18 | ERROR | stderr | Traceback (most recent call last):
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
2024-07-11 22:33:18 | ERROR | stderr |     self.run()
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/threading.py", line 946, in run
2024-07-11 22:33:18 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-40B/modeling_internvl_chat.py", line 280, in chat
2024-07-11 22:33:18 | ERROR | stderr |     generation_output = self.generate(
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:33:18 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/.cache/huggingface/modules/transformers_modules/InternVL2-40B/modeling_internvl_chat.py", line 330, in generate
2024-07-11 22:33:18 | ERROR | stderr |     outputs = self.language_model.generate(
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-07-11 22:33:18 | ERROR | stderr |     return func(*args, **kwargs)
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/utils.py", line 1525, in generate
2024-07-11 22:33:18 | ERROR | stderr |     return self.sample(
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/utils.py", line 2641, in sample
2024-07-11 22:33:18 | ERROR | stderr |     next_token_scores = logits_processor(input_ids, next_token_logits)
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 97, in __call__
2024-07-11 22:33:18 | ERROR | stderr |     scores = processor(input_ids, scores)
2024-07-11 22:33:18 | ERROR | stderr |   File "/mnt/petrelfs/wangweiyun/miniconda3/envs/internvl-apex/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 333, in __call__
2024-07-11 22:33:18 | ERROR | stderr |     score = torch.gather(scores, 1, input_ids)
2024-07-11 22:33:18 | ERROR | stderr | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper_CUDA_gather)
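[Editor's note on the traceback above: the `torch.gather` call fails inside `RepetitionPenaltyLogitsProcessor` because, with `device='auto'` sharding the 40B model across GPUs, the next-token `scores` end up on a different device (cuda:2) than `input_ids` (cuda:0). A common workaround is to align the devices before the gather. The sketch below is a hypothetical standalone reimplementation of the penalty step with that alignment added, shown on CPU tensors; it is not the worker's actual code.]

```python
import torch

def apply_repetition_penalty(scores: torch.Tensor,
                             input_ids: torch.Tensor,
                             penalty: float = 1.1) -> torch.Tensor:
    # Align devices first -- this is the step missing in the failing call,
    # where scores sat on cuda:2 and input_ids on cuda:0.
    input_ids = input_ids.to(scores.device)
    score = torch.gather(scores, 1, input_ids)
    # Match transformers' RepetitionPenaltyLogitsProcessor semantics:
    # negative logits are multiplied by the penalty, positive ones divided.
    score = torch.where(score < 0, score * penalty, score / penalty)
    return scores.scatter(1, input_ids, score)

logits = torch.tensor([[2.0, -1.0, 0.5]])
seen_ids = torch.tensor([[0, 1]])   # tokens already generated
out = apply_repetition_penalty(logits, seen_ids, penalty=2.0)
# Tokens 0 and 1 are penalized (2.0 -> 1.0, -1.0 -> -2.0); token 2 is untouched.
```

In practice the same effect can be had without patching: keep `input_ids` on the device hosting the language-model head, or pin the embedding and lm_head layers to the same GPU in the `device_map`.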
2024-07-11 22:33:22 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2024-07-11 22:33:24 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:33:37 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:33:52 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:34:02 | INFO | stdout | INFO:     10.140.60.209:49124 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:34:04 | INFO | stdout | INFO:     10.140.60.209:49144 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:34:05 | INFO | stdout | INFO:     10.140.60.209:49165 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:34:07 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:34:22 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:34:37 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:34:52 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:35:07 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:35:22 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:35:37 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:35:52 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:36:07 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:36:22 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:36:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:36:43 | INFO | stdout | INFO:     10.140.60.209:50210 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:36:49 | INFO | stdout | INFO:     10.140.60.209:50228 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:36:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:36:57 | INFO | stdout | INFO:     10.140.60.209:50330 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:37:00 | INFO | stdout | INFO:     10.140.60.209:50354 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:37:01 | INFO | stdout | INFO:     10.140.60.209:50374 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:37:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:37:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:38:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:39:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:39:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:39:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:39:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:40:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:40:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:40:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:40:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:40:59 | INFO | stdout | INFO:     10.140.60.209:52238 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:41:01 | INFO | stdout | INFO:     10.140.60.209:52258 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:41:01 | INFO | stdout | INFO:     10.140.60.209:52278 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:41:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:41:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:41:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:41:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:42:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:42:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:42:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:42:53 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:43:07 | INFO | stdout | INFO:     10.140.60.209:52936 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:43:08 | INFO | stdout | INFO:     10.140.60.209:52956 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:43:08 | INFO | stdout | INFO:     10.140.60.209:52976 - "POST /worker_get_status HTTP/1.1" 200 OK
2024-07-11 22:43:08 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:43:23 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:43:38 | INFO | model_worker | Send heart beat. Models: ['InternVL2-40B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2024-07-11 22:43:39 | ERROR | stderr | INFO:     Shutting down