Runtime error

This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-10 02:20:10.654741: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
config.json:   0%|          | 0.00/637 [00:00<?, ?B/s]
config.json: 100%|██████████| 637/637 [00:00<00:00, 2.71MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 4, in <module>
    model = pipeline(task="text-generation", model="codellama/CodeLlama-7b-hf")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 870, in pipeline
    framework, model = infer_framework_load_model(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 291, in infer_framework_load_model
    raise ValueError(
ValueError: Could not load model codellama/CodeLlama-7b-hf with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForCausalLM'>,). See the original errors:

while loading with TFAutoModelForCausalLM, an error is thrown:
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 278, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 569, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llama.configuration_llama.LlamaConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
Model type should be one of BertConfig, CamembertConfig, CTRLConfig, GPT2Config, GPT2Config, GPTJConfig, OpenAIGPTConfig, OPTConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoFormerConfig, TransfoXLConfig, XGLMConfig, XLMConfig, XLMRobertaConfig, XLNetConfig.
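The root cause sits in the final ValueError: the Llama architecture has no TensorFlow implementation, so in a container where only TensorFlow is installed the pipeline falls back to TFAutoModelForCausalLM and can never load this checkpoint. Below is a minimal sketch of a working app.py, assuming torch is added to the Space's requirements.txt; the framework="pt" argument and the example prompt are illustrative choices, not taken from the original app.

# app.py -- minimal sketch; assumes `torch` is installed alongside `transformers`
from transformers import pipeline

# framework="pt" pins the PyTorch backend so the pipeline never attempts the
# (nonexistent) TensorFlow port of Llama.
model = pipeline(
    task="text-generation",
    model="codellama/CodeLlama-7b-hf",
    framework="pt",
)

# Illustrative call: generate a short completion for a code prompt.
print(model("def fibonacci(n):", max_new_tokens=32)[0]["generated_text"])

One practical caveat: a 7B checkpoint in full float32 precision needs on the order of 28 GB of RAM, so on constrained hardware it may help to pass torch_dtype=torch.float16 to the pipeline, which roughly halves the memory footprint at some cost in numerical precision.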
