runtime error
You may switch to Standalone mode for such cases.

'(ReadTimeoutError("HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: 028fd844-246b-4b0f-a08b-7c764998a674)')' thrown while requesting HEAD https://huggingface.co/tareknaous/bert2bert-empathetic-response-msa/resolve/main/tokenizer_config.json
Downloading config.json: 100%|██████████| 4.41k/4.41k [00:00<00:00, 835kB/s]
Downloading tokenizer_config.json: 100%|██████████| 626/626 [00:00<00:00, 594kB/s]
Downloading vocab.txt: 100%|██████████| 653k/653k [00:00<00:00, 70.2MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 112/112 [00:00<00:00, 98.2kB/s]
'(ReadTimeoutError("HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)"), '(Request ID: f288dd5a-c937-4f8d-8657-14ed2fbdff64)')' thrown while requesting HEAD https://huggingface.co/tareknaous/bert2bert-empathetic-response-msa/resolve/main/pytorch_model.bin
Traceback (most recent call last):
  File "app.py", line 17, in <module>
    model = EncoderDecoderModel.from_pretrained("tareknaous/bert2bert-empathetic-response-msa")
  File "/home/user/.local/lib/python3.8/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 387, in from_pretrained
    return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2970, in from_pretrained
    raise EnvironmentError(
OSError: tareknaous/bert2bert-empathetic-response-msa does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
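Two distinct problems appear in this log. The final OSError means the Hub repo does not contain any weight file that `from_pretrained` recognizes (`pytorch_model.bin`, `tf_model.h5`, `model.ckpt`, or `flax_model.msgpack`); that has to be fixed on the repo side or by pointing at a repo that has weights. The earlier ReadTimeoutError messages, however, are transient network failures against huggingface.co. A minimal sketch for the transient part, assuming the `HF_HUB_DOWNLOAD_TIMEOUT` environment variable (read by `huggingface_hub`, default 10 seconds) and a generic retry wrapper around the load call (`with_retries` is a hypothetical helper, not a library API):

```python
import os
import time

# Assumption: raising the Hub HTTP timeout above its 10 s default reduces
# the ReadTimeoutError seen in the log. Must be set before the Hub client
# reads it (i.e. early in app.py).
os.environ.setdefault("HF_HUB_DOWNLOAD_TIMEOUT", "60")


def with_retries(fn, attempts=3, delay=2.0, exceptions=(OSError,)):
    """Call fn(), retrying on transient errors such as read timeouts.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)


# Usage sketch (requires network access, so shown commented out):
# from transformers import EncoderDecoderModel
# model = with_retries(
#     lambda: EncoderDecoderModel.from_pretrained(
#         "tareknaous/bert2bert-empathetic-response-msa"
#     )
# )
```

Note that no amount of retrying fixes the OSError itself: you can confirm what the repo actually contains with `huggingface_hub.HfApi().list_repo_files("tareknaous/bert2bert-empathetic-response-msa")` before attempting the load.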