runtime error
Exit code: 1. Reason:
config.json: 100%|██████████| 724/724 [00:00<00:00, 6.22MB/s]
vae/diffusion_pytorch_model.safetensors: 100%|██████████| 508M/508M [00:01<00:00, 437MB/s]
Loading pipeline components...:  67%|██████▋   | 4/6 [00:01<00:00, 3.95it/s]
`torch_dtype` is deprecated! Use `dtype` instead!
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00, 10.99it/s]
Loading pipeline components...: 100%|██████████| 6/6 [00:01<00:00, 3.94it/s]
Optimizing pipeline...
Waiting for a GPU to become available
Successfully acquired a GPU
SPACES_ZERO_GPU_DEBUG self.arg_queue._writer.fileno()=5
SPACES_ZERO_GPU_DEBUG self.res_queue._writer.fileno()=11
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 143, in worker_init
    torch.init(nvidia_uuid)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 400, in init
    torch.Tensor([0]).cuda()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

Traceback (most recent call last):
  File "/home/user/app/app.py", line 91, in <module>
    optimize_pipeline_(pipe,
  File "/home/user/app/optimization.py", line 125, in optimize_pipeline_
    compiled_transformer_1, compiled_transformer_2 = compile_transformer()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 224, in gradio_handler
    raise error("ZeroGPU worker error", res.error_cls)
gradio.exceptions.Error: 'RuntimeError'