GGUF

#1
by tomgm - opened

Your models are very interesting.
Is there a GGUF version of them?

Owner

Actually, it's easy to create a GGUF model yourself.
It works like this:

# assuming the models are stored under ~/text-generation-webui/models
docker pull ghcr.io/ggerganov/llama.cpp:full
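# convert the Hugging Face checkpoint to an f16 GGUF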
docker run -v ~/text-generation-webui/models:/models ghcr.io/ggerganov/llama.cpp:full --convert /models/nitky_Superswallow-70b-RP-v0.3
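# quantize the f16 GGUF down to Q4_K_M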
docker run -v ~/text-generation-webui/models:/models ghcr.io/ggerganov/llama.cpp:full --quantize /models/nitky_Superswallow-70b-RP-v0.3/ggml-model-f16.gguf /models/nitky_Superswallow-70b-RP-v0.3-Q4_K_M.gguf Q4_K_M

I'll upload it if I have time.

Thank you.
I was able to convert it to GGUF using llama.cpp.
I didn't use the Docker image you listed; is there any difference?

Owner

That Docker image is provided by the llama.cpp developers, so I believe there is no difference in the generated model.
https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#images
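
For reference, the non-Docker route runs the same tools directly. A minimal sketch, assuming a local llama.cpp checkout built with make (the script name and paths can vary between llama.cpp versions):

# convert the Hugging Face checkpoint to an f16 GGUF (convert.py ships with llama.cpp)
python convert.py ~/text-generation-webui/models/nitky_Superswallow-70b-RP-v0.3
# quantize the f16 output to Q4_K_M using the quantize binary built by make
./quantize ~/text-generation-webui/models/nitky_Superswallow-70b-RP-v0.3/ggml-model-f16.gguf ~/text-generation-webui/models/nitky_Superswallow-70b-RP-v0.3-Q4_K_M.gguf Q4_K_M

Either way, the same conversion and quantization code runs, so the resulting GGUF files should match.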

I see. Thank you.

tomgm changed discussion status to closed