
GGUF

GGUF quants for ludis/tsukasa-13b-qlora-limarp
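
As a minimal sketch of how one of these quant files might be loaded, the snippet below uses the llama-cpp-python bindings. The filename, context size, and GPU settings are assumptions, not values taken from this repo.

```python
# Minimal sketch: loading one of the GGUF quants with llama-cpp-python.
# The filename and settings are assumptions; substitute the quant file you download.
from llama_cpp import Llama

llm = Llama(
    model_path="tsukasa-13b-qlora-limarp.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # assumed context window
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)
```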

Prompting

https://rentry.org/tsukasa13b - recommended prompts and generation settings

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: <|system|>, <|user|> and <|model|>.

The <|system|> prompt can be used to inject out-of-channel information behind the scenes, while the <|user|> prompt marks user input. The <|model|> token then signals that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history, as in the sketch below.
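
A minimal sketch of assembling these role tokens into a prompt string (plain Python, no dependencies). The `build_prompt` helper and the example text are illustrative assumptions; only the role tokens themselves come from this card.

```python
# Sketch of building a prompt in the <|system|>/<|user|>/<|model|> format described
# above. build_prompt and the example text are illustrative; only the role tokens
# come from the model card.
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, model_reply) pairs; leave the last reply empty."""
    prompt = f"<|system|>{system}"
    for user_msg, model_reply in turns:
        prompt += f"<|user|>{user_msg}<|model|>{model_reply}"
    return prompt

# The final turn is left open after <|model|> so the model writes the next reply;
# stopping generation on "<|user|>" keeps it from continuing the conversation itself.
prompt = build_prompt(
    "A roleplay chat between two characters.",
    [
        ("Hello there.", "Greetings, traveler."),
        ("What brings you to these parts?", ""),
    ],
)
```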

Training

Base model (mistralai/Mistral-7B-v0.1)

Axolotl was used for training on a 4x NVIDIA A40 GPU cluster.

The A40 GPU cluster was graciously provided by Arc Compute.

Rank-8 LoRA tune of mistralai/Mistral-7B-v0.1: first tuned on koishi (commit 6e675d1) for one epoch, then on LimaRP (version 2023-09-30, without the Ponyville, Lolicit, All the Fallen, and Eka's Portal subsets) for two epochs in Metharme format.
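
The tune itself was done with axolotl, and its config is not reproduced here. Purely as an illustration of what a rank-8 LoRA setup looks like, here is a sketch using the Hugging Face PEFT library; the target modules and hyperparameters are assumptions, not the values used for this model.

```python
# Illustrative only: a rank-8 LoRA configuration via Hugging Face PEFT.
# The actual training used axolotl; target_modules and hyperparameters below
# are assumptions, not the settings used for this model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_cfg = LoraConfig(
    r=8,                     # rank 8, as stated in the card
    lora_alpha=16,           # assumed
    lora_dropout=0.05,       # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```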

Model size: 13B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
