
This model was developed by KAIST ALIN Lab and OMNIOUS.AI (Hyunseok Lee, Taeyoung Kim).

Input: Models input text only.

Output: Models generate text only.

Model Architecture
ko-en-llama2-13b-aligned is an auto-regressive language model based on the LLaMA2 transformer architecture.

Base Model
hyunseoki/ko-en-llama2-13b
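
A minimal sketch of loading the base checkpoint with the Hugging Face transformers library. The repo id is the base model named on this card; the prompt and the lazy-loading structure are illustrative choices, not the authors' code. Downloading the 13B F32 weights requires roughly 52 GB, so the heavy work is kept inside a function and only runs when invoked.

```python
MODEL_ID = "hyunseoki/ko-en-llama2-13b"  # base model listed on this card

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Lazily load the model and return a completion for `prompt`."""
    # Imported here so merely defining this module does not require
    # torch/transformers to be installed or the weights to be downloaded.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float32  # card lists F32 tensors
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Hypothetical Korean prompt: "Hello, please introduce yourself."
    print(generate("안녕하세요, 자기소개를 해주세요."))
```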

Training Dataset
Open datasets: Wikipedia and AI Hub (English + Korean). The model was supervised fine-tuned on an instruction dataset and then aligned with a human-preference dataset using DPO (Direct Preference Optimization).
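
The DPO alignment step mentioned above can be sketched as follows. This is a generic illustration of the DPO objective for a single preference pair, not the authors' training code, and the log-probabilities in the example are hypothetical numbers.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a response under
    the trainable policy or the frozen reference model; beta controls
    how far the policy may drift from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)), written stably as log1p(exp(-margin))
    return math.log1p(math.exp(-margin))

# Hypothetical log-probs: the policy favors the chosen response more
# than the reference does, so the margin is positive and the loss is
# below log(2) (the value at zero margin).
loss = dpo_loss(-10.0, -14.0, -11.0, -12.0, beta=0.5)
```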

Safetensors
Model size: 13B params · Tensor type: F32