
Hypernova-experimental

Quantized to GGUF using llama.cpp
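For reference, a minimal sketch of the usual llama.cpp convert-then-quantize workflow. The paths, script names, and the Q4_K_M target below are assumptions about the general process, not a record of the exact commands used for this repo.

```python
# Sketch of the typical llama.cpp GGUF flow (assumed, not the exact commands used here).
import subprocess

# 1. Convert the merged HF checkpoint to a 16-bit GGUF file.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", "Hypernova-experimental",
        "--outfile", "hypernova-f16.gguf", "--outtype", "f16",
    ],
    check=True,
)

# 2. Quantize the 16-bit GGUF down to a smaller type (Q4_K_M shown as an example).
subprocess.run(
    ["llama.cpp/llama-quantize", "hypernova-f16.gguf", "hypernova-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```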

I tried some new things this time around, and the outcome was very different from what I expected. This is an experimental model created during the development of NovaAI.

Good at chatting and some roleplay. It sometimes mixes up characters and can occasionally struggle with context.

Prompt Template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
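A minimal sketch of running the model with this template via llama-cpp-python. The GGUF filename and the sampling settings are illustrative assumptions, not part of this card.

```python
# Sketch: chat with a local GGUF quant using llama-cpp-python.
# The filename and generation settings below are assumptions, not from this card.
from llama_cpp import Llama

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

llm = Llama(model_path="hypernova-experimental.Q4_K_M.gguf", n_ctx=4096)  # hypothetical filename

def generate(prompt: str) -> str:
    """Format the prompt with the Alpaca template and return the completion."""
    out = llm(
        ALPACA_TEMPLATE.format(prompt=prompt),
        max_tokens=256,
        stop=["### Instruction:"],  # stop before the model starts a new turn
    )
    return out["choices"][0]["text"].strip()

print(generate("Introduce yourself in one sentence."))
```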

Models Merged

The following models were included in the merge:

Some fine-tuning was done as well.

GGUF
Model size: 13B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
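To fetch one of these quants programmatically, something like the following works with huggingface_hub. The filename is a guess at the naming scheme; check the repo's file list for the actual names.

```python
# Sketch: download a single quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="theNovaAI/Hypernova-experimental-GGUF",
    filename="hypernova-experimental.Q4_K_M.gguf",  # hypothetical filename
)
print(path)  # local cache path to the downloaded GGUF
```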


