runtime error
Exit code: 1. Reason:
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 674, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 74, in shot
    model_detections = detect_using_clip(image,prompts=prompts)
  File "/home/user/app/app.py", line 40, in detect_using_clip
    outputs = model(**inputs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/clipseg/modeling_clipseg.py", line 1436, in forward
    vision_outputs = self.clip.vision_model(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/clipseg/modeling_clipseg.py", line 870, in forward
    hidden_states = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/clipseg/modeling_clipseg.py", line 211, in forward
    raise ValueError(
ValueError: Input image size (352*352) doesn't match model (224*224).

IMPORTANT: You are using gradio version 4.7.1, however version 4.44.1 is available, please upgrade.
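The crash comes from the CLIPSeg call in detect_using_clip: the processor resizes the input image to 352x352 pixel values, while the loaded checkpoint's vision config expects 224x224, so CLIPSegVisionEmbeddings raises the ValueError. Below is a minimal sketch of two possible workarounds, assuming the Space uses the standard CLIPSegProcessor / CLIPSegForImageSegmentation pair; the checkpoint name "CIDAS/clipseg-rd64-refined" and the function body are assumptions, since app.py itself is not shown in the logs.

```python
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Assumption: the Space loads a CLIPSeg checkpoint roughly like this;
# the exact checkpoint name is not visible in the logs.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# Option A: make the image processor emit pixel_values at the size the loaded
# vision tower expects (224x224 per the error) instead of its default 352x352.
# processor.image_processor.size = {"height": 224, "width": 224}

def detect_using_clip(image, prompts):
    inputs = processor(text=prompts, images=[image] * len(prompts),
                       padding=True, return_tensors="pt")
    # Option B: keep the 352x352 input and let the model interpolate its
    # position embeddings; the traceback shows this transformers build already
    # threads an interpolate_pos_encoding flag down to the vision embeddings.
    outputs = model(**inputs, interpolate_pos_encoding=True)
    return outputs
```

Pinning transformers to the version the Space was originally built against (where the processor default and the model config agreed) is another option. The gradio upgrade notice in the log is only an advisory and is not the cause of this crash.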