
I'm not sure the GGUF version is usable; I haven't tested it yet. But I expect this to be a very good model.

Inixion-v2 is my best MoErge (MoE merge) model so far, capable of both RP and general tasks. It can:

  • Understand character cards well
  • Reason with solid logic, since it is a fairly smart model
  • Play multiple characters if you like
  • Stretch to a 16k context length with RoPE scaling, though that may be unstable (see the sketch below)
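
A minimal loading sketch for the 16k RoPE-scaled setup, assuming a recent transformers, bfloat16 support, and enough VRAM; the model ID is real, but the scaling factor and the character-card prompt are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/Inixion-2x8B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # Llama 3's native context is 8k; a linear factor of 2.0 stretches it
    # to roughly 16k. As noted above, this may be unstable.
    rope_scaling={"type": "linear", "factor": 2.0},
)

# Illustrative character-card prompt via the chat template.
messages = [
    {"role": "system", "content": "You are Aria, a sarcastic ship AI. Stay in character."},
    {"role": "user", "content": "Aria, status report?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```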

Some things are still not great:

  • It is built from an 8B base, so even at ~13.7B parameters after the MoE merge it still lacks knowledge and can sometimes get into trouble
  • Unexpected bugs may occur
  • It is a big model, so you need at least 8GB of VRAM to run it, and ideally 12GB to load the GGUF version
  • It is not very violent or toxic (maybe I should list this under the good features? Not everyone wants a toxic model w(゚Д゚)w )

GGUF?

https://huggingface.co./Alsebay/Inixion-2x8B-v2-GGUF

My own GGUF quants; please use the fixed version.

You can find other GGUF versions from my friend: https://huggingface.co./mradermacher

Here is the imatrix GGUF version: https://huggingface.co./mradermacher/Inixion-2x8B-v2-i1-GGUF
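
A minimal sketch of running a quant with llama-cpp-python; the filename and quant level are assumptions, so pick whichever file fits your VRAM (per the note above, ~12GB is comfortable):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Inixion-2x8B-v2.Q4_K_M.gguf",  # hypothetical filename; use the file you downloaded
    n_gpu_layers=-1,  # offload every layer to the GPU; lower this if you run out of VRAM
    n_ctx=8192,
)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```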

I forgot to share the recipe:

```yaml
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
source_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
source_model: Sao10K/L3-8B-Stheno-v3.2
```
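
As a sanity check on the ~13.7B parameter count mentioned above, here is a minimal sketch; note it downloads the full bf16 weights, so expect roughly 28GB of disk and RAM:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Alsebay/Inixion-2x8B-v2", torch_dtype=torch.bfloat16
)
total = sum(p.numel() for p in model.parameters())
# Roughly 13.7B: the two 8B experts duplicate only the MLP layers,
# while attention and embeddings are shared, so 2x8B != 16B.
print(f"{total / 1e9:.1f}B parameters")
```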

All of these are very smart models, so you should enjoy it.

