/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm gen_config ../dist/models/gorilla-openfunctions-v2 --quantization q8f32_1 --conv-template gorilla --output /tmp/tmpfb6fqbz7
[2024-03-18 21:03:38] INFO auto_config.py:115: Found model configuration: ../dist/models/gorilla-openfunctions-v2/config.json
[2024-03-18 21:03:38] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:03:38] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:03:38] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
[2024-03-18 21:03:38] INFO config.py:106: Overriding max_batch_size from 1 to 80
[2024-03-18 21:03:38] INFO gen_config.py:133: [generation_config.json] Setting bos_token_id: 100000
[2024-03-18 21:03:38] INFO gen_config.py:133: [generation_config.json] Setting eos_token_id: 100015
[2024-03-18 21:03:38] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/gorilla-openfunctions-v2/tokenizer.model
[2024-03-18 21:03:38] INFO gen_config.py:145: Found tokenizer config: ../dist/models/gorilla-openfunctions-v2/tokenizer.json. Copying to /tmp/tmpfb6fqbz7/tokenizer.json
[2024-03-18 21:03:38] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/gorilla-openfunctions-v2/vocab.json
[2024-03-18 21:03:38] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/gorilla-openfunctions-v2/merges.txt
[2024-03-18 21:03:38] INFO gen_config.py:147: Not found tokenizer config: ../dist/models/gorilla-openfunctions-v2/added_tokens.json
[2024-03-18 21:03:38] INFO gen_config.py:145: Found tokenizer config: ../dist/models/gorilla-openfunctions-v2/tokenizer_config.json. Copying to /tmp/tmpfb6fqbz7/tokenizer_config.json
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting pad_token_id: 0
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting temperature: 0.7
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting presence_penalty: 0.0
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting frequency_penalty: 0.0
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting repetition_penalty: 1.0
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting top_p: 0.95
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting mean_gen_len: 128
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting max_gen_len: 512
[2024-03-18 21:03:38] INFO gen_config.py:75: [System default] Setting shift_fill_factor: 0.3
[2024-03-18 21:03:38] INFO gen_config.py:198: Dumping configuration file to: /tmp/tmpfb6fqbz7/mlc-chat-config.json
/home/floriadmin/miniforge3/envs/mlc/bin/python -m mlc_llm convert_weight ../dist/models/gorilla-openfunctions-v2 --quantization q8f32_1 --source-format auto --output /tmp/tmpfb6fqbz7
[2024-03-18 21:03:39] INFO auto_config.py:115: Found model configuration: ../dist/models/gorilla-openfunctions-v2/config.json
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:0
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:1
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:2
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:3
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:4
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:5
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:6
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:7
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:8
[2024-03-18 21:03:39] INFO auto_device.py:76: Found device: cuda:9
[2024-03-18 21:03:40] INFO auto_device.py:85: Not found device: rocm:0
[2024-03-18 21:03:41] INFO auto_device.py:85: Not found device: metal:0
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:0
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:1
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:2
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:3
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:4
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:5
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:6
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:7
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:8
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:9
[2024-03-18 21:03:44] INFO auto_device.py:76: Found device: vulkan:10
[2024-03-18 21:03:45] INFO auto_device.py:85: Not found device: opencl:0
[2024-03-18 21:03:45] INFO auto_device.py:33: Using device: cuda:0
[2024-03-18 21:03:45] INFO auto_weight.py:70: Finding weights in: ../dist/models/gorilla-openfunctions-v2
[2024-03-18 21:03:45] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: ../dist/models/gorilla-openfunctions-v2/pytorch_model.bin.index.json
[2024-03-18 21:03:45] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: ../dist/models/gorilla-openfunctions-v2/model.safetensors.index.json
[2024-03-18 21:03:45] INFO auto_weight.py:106: Using source weight configuration: ../dist/models/gorilla-openfunctions-v2/pytorch_model.bin.index.json. Use `--source` to override.
[2024-03-18 21:03:45] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.
[2024-03-18 21:03:45] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.
[2024-03-18 21:03:45] INFO llama_model.py:52: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-03-18 21:03:45] INFO llama_model.py:72: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
  --config          ../dist/models/gorilla-openfunctions-v2/config.json
  --quantization    GroupQuantize(name='q8f32_1', kind='group-quant', group_size=32, quantize_dtype='int8', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', quantize_embedding=True, quantize_final_fc=True, num_elem_per_storage=4, num_storage_per_group=8, max_int_value=127)
  --model-type      llama
  --device          cuda:0
  --source          ../dist/models/gorilla-openfunctions-v2/pytorch_model.bin.index.json
  --source-format   huggingface-torch
  --output          /tmp/tmpfb6fqbz7
Start storing to cache /tmp/tmpfb6fqbz7

  0%|                                                                                                    | 0/183 [00:00<?, ?it/s]
[2024-03-18 21:03:47] INFO huggingface_loader.py:182: Loading HF parameters from: ../dist/models/gorilla-openfunctions-v2/pytorch_model-00002-of-00002.bin
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 47, in <module>
    main()
  File "/home/floriadmin/mlc-llm/python/mlc_llm/__main__.py", line 28, in main
    cli.main(sys.argv[2:])
  File "/home/floriadmin/mlc-llm/python/mlc_llm/cli/convert_weight.py", line 87, in main
    convert_weight(
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 182, in convert_weight
    _convert_args(args)
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 146, in _convert_args
    tvmjs.dump_ndarray_cache(
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/tvm/contrib/tvmjs.py", line 210, in dump_ndarray_cache
    for k, origin_v in param_generator:
  File "/home/floriadmin/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 130, in _param_generator
    for name, param in loader.load(device=args.device, preshard_funcs=preshard_funcs):
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 117, in load
    param = self._load_mlc_param(mlc_name, device=device)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 147, in _load_mlc_param
    self._load_file(path)
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/huggingface_loader.py", line 186, in _load_file
    for name, param in load_func(path):
  File "/home/floriadmin/mlc-llm/python/mlc_llm/loader/utils.py", line 42, in load_torch_shard
    for name, param in torch.load(path, map_location=torch.device("cpu")).items():
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/floriadmin/miniforge3/envs/mlc/lib/python3.11/site-packages/torch/serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
                     ^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../dist/models/gorilla-openfunctions-v2/pytorch_model-00002-of-00002.bin'
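
Note on the failure above: auto_weight selected the `huggingface-torch` source format because `pytorch_model.bin.index.json` was present, but the shard that index references (`pytorch_model-00002-of-00002.bin`) is missing on disk; since a `model.safetensors.index.json` was also detected, passing `--source ../dist/models/gorilla-openfunctions-v2/model.safetensors.index.json` (or `--source-format huggingface-safetensor`) is a plausible workaround. A minimal standalone sketch (a hypothetical helper, not part of mlc_llm) for checking an HF weight index against the files actually present before running conversion:

```python
import json
import os
import tempfile

def missing_shards(index_path):
    """Return shard filenames referenced by a HuggingFace weight index
    (its "weight_map" section) that do not exist next to the index file."""
    with open(index_path) as f:
        index = json.load(f)
    model_dir = os.path.dirname(index_path)
    shards = set(index["weight_map"].values())
    return sorted(s for s in shards
                  if not os.path.exists(os.path.join(model_dir, s)))

# Demonstration with a fabricated two-shard index in a temp dir,
# where only the first shard file actually exists:
with tempfile.TemporaryDirectory() as d:
    idx = os.path.join(d, "pytorch_model.bin.index.json")
    with open(idx, "w") as f:
        json.dump({"weight_map": {
            "w.a": "pytorch_model-00001-of-00002.bin",
            "w.b": "pytorch_model-00002-of-00002.bin",
        }}, f)
    open(os.path.join(d, "pytorch_model-00001-of-00002.bin"), "wb").close()
    print(missing_shards(idx))  # → ['pytorch_model-00002-of-00002.bin']
```

Running such a check before `convert_weight` would have flagged the absent `.bin` shard up front instead of failing midway through the cache dump.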