Runtime error
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers

Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-34b-hf")
  File "/usr/local/lib/python3.10/site-packages/transformers/processing_utils.py", line 465, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/processing_utils.py", line 511, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2110, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2336, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 159, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 105, in __init__
    raise ValueError(
ValueError: Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed.
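The final ValueError points at the likely cause: the fast LLaMA tokenizer has to be converted from its slow, sentencepiece-based counterpart, and that conversion requires the `sentencepiece` package (recent `transformers` versions typically also need `protobuf` for this step). Assuming this app runs as a Hugging Face Space, a sketch of the `requirements.txt` additions that should resolve it:

```
transformers
sentencepiece
protobuf
```

After updating the file, rebuild or restart the Space so the new dependencies are installed before `LlavaNextProcessor.from_pretrained` runs again.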