
Dummy GPT2 for TRL testing

A tiny, randomly initialized GPT-2 model used by TRL's internal test suite. It was created and pushed to the Hub with the following script:

```python
from transformers import AutoTokenizer, GPT2Config, GPT2LMHeadModel

# Tiny GPT-2: 5 layers, 4 heads, 32-dim embeddings, 37-dim MLP inner size.
config = GPT2Config(n_positions=512, n_embd=32, n_layer=5, n_head=4, n_inner=37, pad_token_id=1023, is_decoder=True)
model = GPT2LMHeadModel(config)  # randomly initialized weights

# Reuse the standard GPT-2 tokenizer so the vocabulary matches.
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

model_id = "trl-internal-testing/dummy-GPT2-correct-vocab"
model.push_to_hub(model_id)

# Minimal chat template: user turns get a leading space, turns are separated
# by two spaces, and the EOS token closes the conversation.
tokenizer.chat_template = "{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ '  ' }}{% endif %}{% endfor %}{{ eos_token }}"
tokenizer.push_to_hub(model_id)
config.push_to_hub(model_id)
```
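The chat template set on the tokenizer is a plain Jinja2 string, so its behavior can be sketched by rendering it directly with `jinja2` (the message contents and the `eos_token` value below are illustrative; `transformers` renders the same template internally when `apply_chat_template` is called):

```python
from jinja2 import Template

chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
    "{{ message['content'] }}"
    "{% if not loop.last %}{{ '  ' }}{% endif %}"
    "{% endfor %}{{ eos_token }}"
)

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
]
rendered = Template(chat_template).render(messages=messages, eos_token="<|endoftext|>")
print(repr(rendered))  # ' Hello  Hi there<|endoftext|>'
```

User turns are prefixed with a single space, consecutive turns are joined with two spaces, and the EOS token is appended once at the end.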
Downloads last month: 1,456,616

Model size: 1.66M params (Safetensors, F32)
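The 1.66M figure can be reproduced by hand from the config dimensions. This sketch assumes the GPT-2 default vocabulary size of 50,257 and weight tying between `lm_head` and the token embeddings (so the output head adds no parameters):

```python
# Parameter count for GPT2Config(n_positions=512, n_embd=32, n_layer=5,
# n_head=4, n_inner=37) with the default GPT-2 vocab of 50257 tokens.
vocab, n_embd, n_pos, n_layer, n_inner = 50257, 32, 512, 5, 37

wte = vocab * n_embd   # token embeddings (tied with lm_head)
wpe = n_pos * n_embd   # position embeddings
attn = (n_embd * 3 * n_embd + 3 * n_embd) + (n_embd * n_embd + n_embd)  # c_attn + c_proj
mlp = (n_embd * n_inner + n_inner) + (n_inner * n_embd + n_embd)        # c_fc + c_proj
ln = 2 * 2 * n_embd    # two LayerNorms per block, weight + bias each
per_layer = attn + mlp + ln

total = wte + wpe + n_layer * per_layer + 2 * n_embd  # + final LayerNorm
print(total)  # 1658617, i.e. ~1.66M params
```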