runtime error

Exit code: 1. Reason: ake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading pipeline components...:  29%|██▊       | 2/7 [00:01<00:04,  1.16it/s]
Loading pipeline components...:  57%|█████▋    | 4/7 [00:11<00:09,  3.12s/it]
The config attributes {'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'beta_start': 0.00085, 'clip_sample': False, 'interpolation_type': 'linear', 'set_alpha_to_one': False, 'skip_prk_steps': True, 'steps_offset': 1, 'timestep_spacing': 'leading', 'trained_betas': None, 'use_karras_sigmas': False} were passed to EDMDPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Loading pipeline components...:  86%|████████▌ | 6/7 [00:17<00:03,  3.08s/it]
Loading pipeline components...: 100%|██████████| 7/7 [00:17<00:00,  2.47s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 19, in <module>
    pipeline.load_lora_weights(model_repo_id)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 542, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
