Please support phi-3.5-mini-instruct in llama.cpp
#20 opened by ThiloteE
See the bug report "phi 3.5 mini produces garbage past 4096 context" (#9127).
If you want people to use your model, please support llama.cpp.
Many users rely on GGUF models, as they are small and fast.
As it stands, I cannot recommend this model to anybody. As an alternative, I can only point people to the older Phi-3 model series, not Phi-3.5.