Integrate with FastChat?
$ docker build -t naison/fastchat:v1 -f /Users/users/Desktop/new/demo/LLM/fastchat.Dockerfile .
[+] Building 57.4s (20/20) FINISHED
=> [internal] load build definition from fastchat.Dockerfile 0.1s
=> => transferring dockerfile: 1.18kB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 0.0s
=> [ 1/16] FROM docker.io/nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 9.53kB 0.0s
=> CACHED [ 2/16] WORKDIR /app/FastChat 0.0s
=> CACHED [ 3/16] RUN apt-get update 0.0s
=> CACHED [ 4/16] RUN apt-get install -y software-properties-common 0.0s
=> CACHED [ 5/16] RUN apt-get update 0.0s
=> CACHED [ 6/16] RUN apt-get install -y git 0.0s
=> CACHED [ 7/16] RUN apt-get install -y python3.11 python3-pip 0.0s
=> CACHED [ 8/16] RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh 0.0s
=> CACHED [ 9/16] RUN git clone https://github.com/lm-sys/FastChat.git . 0.0s
=> CACHED [10/16] RUN pip3 install --upgrade pip 0.0s
=> CACHED [11/16] RUN pip3 install -e ".[model_worker,webui]" 0.0s
=> CACHED [12/16] RUN pip3 install accelerate 0.0s
=> CACHED [13/16] RUN pip3 install transformers 0.0s
=> CACHED [14/16] RUN mkdir -p /app/models/output 0.0s
=> [15/16] COPY replit-code-v1-3b /app/models/replit-code-v1-3b 51.0s
=> ERROR [16/16] RUN python3 -m fastchat.serve.cli --model /app/models/replit-code-v1-3b --debug 6.1s
[16/16] RUN python3 -m fastchat.serve.cli --model /app/models/replit-code-v1-3b --debug:
#20 5.673 Traceback (most recent call last):
#20 5.673 File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
#20 5.673 return _run_code(code, main_globals, None,
#20 5.673 File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
#20 5.673 exec(code, run_globals)
#20 5.673 File "/app/FastChat/fastchat/serve/cli.py", line 291, in
#20 5.673 main(args)
#20 5.673 File "/app/FastChat/fastchat/serve/cli.py", line 215, in main
#20 5.673 chat_loop(
#20 5.673 File "/app/FastChat/fastchat/serve/inference.py", line 313, in chat_loop
#20 5.673 model, tokenizer = load_model(
#20 5.673 File "/app/FastChat/fastchat/model/model_adapter.py", line 301, in load_model
#20 5.673 model, tokenizer = adapter.load_model(model_path, kwargs)
#20 5.673 File "/app/FastChat/fastchat/model/model_adapter.py", line 70, in load_model
#20 5.673 tokenizer = AutoTokenizer.from_pretrained(
#20 5.673 File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained
#20 5.675 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
#20 5.675 File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained
#20 5.678 return cls._from_pretrained(
#20 5.678 File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained
#20 5.679 tokenizer = cls(*init_inputs, **init_kwargs)
#20 5.679 File "/root/.cache/huggingface/modules/transformers_modules/replit-code-v1-3b/replit_lm_tokenizer.py", line 66, in init
#20 5.679 super().init(bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, pad_token=pad_token, sep_token=sep_token, sp_model_kwargs=self.sp_model_kwargs, **kwargs)
#20 5.679 File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py", line 367, in init
#20 5.681 self._add_tokens(
#20 5.681 File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py", line 467, in _add_tokens
#20 5.682 current_vocab = self.get_vocab().copy()
#20 5.682 File "/root/.cache/huggingface/modules/transformers_modules/replit-code-v1-3b/replit_lm_tokenizer.py", line 76, in get_vocab
#20 5.682 vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
#20 5.682 File "/root/.cache/huggingface/modules/transformers_modules/replit-code-v1-3b/replit_lm_tokenizer.py", line 73, in vocab_size
#20 5.682 return self.sp_model.get_piece_size()
#20 5.682 AttributeError: 'ReplitLMTokenizer' object has no attribute 'sp_model'
executor failed running [/bin/sh -c python3 -m fastchat.serve.cli --model /app/models/replit-code-v1-3b --debug]: exit code: 1
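The traceback does not end inside FastChat itself but inside `replit_lm_tokenizer.py`, the model's own custom tokenizer code (the file under `/root/.cache/huggingface/modules/transformers_modules/...` ships with the checkpoint, not with FastChat or transformers). This looks like a transformers version problem: the Dockerfile runs an unpinned `pip3 install transformers`, and in newer releases (around 4.34, as far as I can tell) the base tokenizer `__init__` builds the added-token map itself and calls `get_vocab()` before the subclass has created `self.sp_model`. A minimal sketch of that ordering problem, using hypothetical stand-in classes rather than the real transformers ones:

```python
# Hypothetical stand-ins (not the real transformers classes) showing why the
# AttributeError fires: the base __init__ asks for the vocab before the
# subclass has created self.sp_model.
class BaseTokenizer:
    def __init__(self):
        self._add_tokens()                       # newer transformers do this in __init__

    def _add_tokens(self):
        current_vocab = self.get_vocab().copy()  # needs the subclass vocab to exist already


class ReplitLikeTokenizer(BaseTokenizer):
    def __init__(self):
        super().__init__()                       # get_vocab() runs in here ...
        self.sp_model = object()                 # ... but sp_model is only set afterwards

    def get_vocab(self):
        return {i: i for i in range(self.sp_model.get_piece_size())}


try:
    ReplitLikeTokenizer()
except AttributeError as err:
    print(err)  # 'ReplitLikeTokenizer' object has no attribute 'sp_model'
```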
Is replit-code-v1-3b simply not usable via FastChat, or can this be worked around?
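Two workarounds seem plausible (neither tested here): pin transformers to a pre-refactor release in the Dockerfile, e.g. `pip3 install "transformers<4.34"` instead of the unpinned `pip3 install transformers`, or patch the model's `replit_lm_tokenizer.py` so the SentencePiece model is loaded before `super().__init__()` is called. A rough sketch of the second option follows; the constructor signature is reconstructed from the traceback and only the reordering is the actual change:

```python
# Sketch of a patched __init__ for replit_lm_tokenizer.py (untested): load the
# SentencePiece model *before* calling super().__init__(), so vocab_size and
# get_vocab() work when the transformers base class asks for them.
import sentencepiece as spm
from transformers import PreTrainedTokenizer


class ReplitLMTokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, bos_token=None, eos_token=None,
                 unk_token=None, pad_token=None, sep_token=None,
                 sp_model_kwargs=None, **kwargs):
        self.sp_model_kwargs = sp_model_kwargs if sp_model_kwargs is not None else {}
        self.vocab_file = vocab_file
        # Moved up: originally this happened after super().__init__()
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        super().__init__(bos_token=bos_token, eos_token=eos_token,
                         unk_token=unk_token, pad_token=pad_token,
                         sep_token=sep_token,
                         sp_model_kwargs=self.sp_model_kwargs, **kwargs)

    @property
    def vocab_size(self):
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        return {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}

    # ... the rest of the original replit_lm_tokenizer.py stays unchanged ...
```

With either change the image needs rebuilding; the final `RUN python3 -m fastchat.serve.cli --model /app/models/replit-code-v1-3b --debug` step should then at least get past tokenizer loading.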