runtime error

Downloading (…)ored.ggmlv3.q8_0.bin: 100%|██████████| 7.16G/7.16G [02:11<00:00, 54.3MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--llama2_7b_chat_uncensored-GGML/snapshots/119077c338e1abaaea44d24d122315c77392fca4/llama2_7b_chat_uncensored.ggmlv3.q8_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 22, in <module>
    lcpp_llm = Llama(
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 323, in __init__
    assert self.model is not None
AssertionError
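For context on the failure (an interpretation added here, not stated in the log): the rejected magic `0x67676a74` spells the ASCII bytes `ggjt`, the header used by GGML v3 model files, while recent llama.cpp / llama-cpp-python builds only accept the newer GGUF container (magic `GGUF`). The download succeeded; the file format is simply too old for the loader. A minimal sketch that inspects a model file's first four bytes before handing it to `Llama` — the helper name and the exact set of legacy magics are assumptions, not part of the llama-cpp-python API:

```python
def detect_model_format(path):
    """Return a rough format label based on the file's 4-byte magic."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "gguf"          # container that current llama.cpp expects
    if magic in (b"ggml", b"ggmf", b"ggjt"):
        return "legacy ggml"   # pre-GGUF formats, rejected as "invalid magic number"
    return "unknown"

# The magic from the log decodes to the GGML v3 signature:
print((0x67676a74).to_bytes(4, "big").decode("ascii"))  # -> ggjt
```

If the check reports `legacy ggml`, the usual remedy is to fetch a GGUF build of the model (or convert the `.ggmlv3 .bin` with llama.cpp's conversion script) instead of loading the GGML file directly.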
