---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
- Locutusque/Hercules-v3.0
language:
- en
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    min_new_tokens: 2
    max_new_tokens: 250
    repetition_penalty: 1.1
---
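The inference parameters above can be expressed as the keyword arguments you would pass to a Hugging Face `model.generate(...)` call. This is a minimal sketch of that mapping; model loading is omitted and the exact serving setup is an assumption.

```python
# Sketch: the frontmatter's inference parameters as generation kwargs
# (the values mirror the YAML above; model loading is omitted).
generation_kwargs = {
    "do_sample": True,          # sample instead of greedy decoding
    "temperature": 0.8,         # soften the token distribution
    "top_p": 0.95,              # nucleus (top-p) sampling cutoff
    "top_k": 40,                # restrict sampling to the 40 most likely tokens
    "min_new_tokens": 2,        # generate at least 2 new tokens
    "max_new_tokens": 250,      # cap the response length
    "repetition_penalty": 1.1,  # discourage verbatim repetition
}
```

These would be unpacked as `model.generate(**inputs, **generation_kwargs)` when running the model locally with the `transformers` library.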
# NeuralReyna-Mini-1.8B-v0.2
## Description

NeuralReyna-Mini-1.8B-v0.2 takes aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tunes it with DPO on the Intel/orca_dpo_pairs dataset.

This model has capabilities in coding, math, science, roleplay, and function calling.

This model was trained using OpenAI's ChatML prompt format.
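For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. The helper below is an illustrative sketch of that format (the function name is hypothetical, not part of this model's API); in practice the tokenizer's chat template produces the same layout.

```python
# Minimal sketch of the ChatML prompt format this model was trained on.
# `build_chatml_prompt` is a hypothetical helper for illustration only.
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn to cue the model's reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(prompt)
```

With the `transformers` library, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` serves the same purpose without hand-rolling the format.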
## Evaluation

## Disclaimer

This model may have overfitted to the DPO training data, and may therefore perform poorly on inputs unlike it.
## Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.