runtime error
Exit code: 1. Reason:
tokenizer_config.json: 100%|██████████| 1.16M/1.16M [00:00<00:00, 87.5MB/s]
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
tokenizer.json: 100%|██████████| 33.4M/33.4M [00:00<00:00, 54.2MB/s]
added_tokens.json: 100%|██████████| 63.0/63.0 [00:00<00:00, 294kB/s]
special_tokens_map.json: 100%|██████████| 706/706 [00:00<00:00, 3.04MB/s]
chat_template.jinja: 100%|██████████| 13.8k/13.8k [00:00<00:00, 53.6MB/s]
Traceback (most recent call last):
  File "/app/app.py", line 12, in <module>
    model = AutoModelForCausalLM.from_pretrained(
        model_name, device_map="auto", torch_dtype="auto"
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 374, in from_pretrained
    return model_class.from_pretrained(
        pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4001, in from_pretrained
    device_map = check_and_set_device_map(device_map)  # warn, error and fix the device map
  File "/usr/local/lib/python3.13/site-packages/transformers/integrations/accelerate.py", line 134, in check_and_set_device_map
    raise ValueError(
        ...<2 lines>...
    )
ValueError: Using a `device_map`, `tp_plan`, `torch.device` context manager or setting `torch.set_default_device(device)` requires `accelerate`. You can install it with `pip install accelerate`
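The traceback points at line 12 of /app/app.py: passing device_map="auto" to AutoModelForCausalLM.from_pretrained requires the accelerate package, which is not installed in the container. The direct fix, as the ValueError itself says, is to add accelerate to the environment (e.g. pip install accelerate, or an entry in requirements.txt). A more defensive sketch is to only request automatic device placement when accelerate is actually importable; the helper name pretrained_kwargs below is hypothetical, not part of transformers:

```python
import importlib.util

def pretrained_kwargs():
    """Build kwargs for AutoModelForCausalLM.from_pretrained that only
    ask for automatic device placement when `accelerate` is importable,
    avoiding the ValueError in the traceback above."""
    kwargs = {"torch_dtype": "auto"}
    # device_map="auto" is handled by accelerate; skip it when the
    # package is missing so loading falls back to the default device
    if importlib.util.find_spec("accelerate") is not None:
        kwargs["device_map"] = "auto"
    return kwargs
```

The call site would then become model = AutoModelForCausalLM.from_pretrained(model_name, **pretrained_kwargs()). Installing accelerate remains the better fix when GPU placement is wanted; the helper only keeps the app from crashing in environments where it is absent.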