---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-spelling-nl
  results: []
---

# bart-base-spelling-nl

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for Dutch spelling correction.

It achieves the following results on the evaluation set:

- Loss: 0.0276
- CER (character error rate): 0.0147
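
As a point of reference, the CER can be computed with the `cer` metric from the `evaluate` library. This is a minimal sketch; the example strings are placeholders, not items from the evaluation set:

```python
# Minimal sketch: computing the character error rate (CER) between
# model predictions and reference sentences with the `evaluate` library.
import evaluate

cer_metric = evaluate.load("cer")

predictions = ["Dit is een voorbeeldzin."]  # illustrative placeholder
references = ["Dit is een voorbeeldzin."]   # illustrative placeholder

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}")  # 0.0 for identical strings
```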

## Model description

This is a text-to-text model fine-tuned from facebook/bart-base for spelling correction. It builds on the excellent work by Oliver Guhr (github, huggingface). Training was performed on a single GPU on an AWS EC2 g5.xlarge instance and took about 4 hours.
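
A minimal usage sketch with the `transformers` pipeline follows. The repository id `antalvdb/bart-base-spelling-nl` is an assumption based on this card; adjust it if the model lives elsewhere:

```python
# Minimal sketch: correcting Dutch spelling with the text2text-generation
# pipeline. The repository id is an assumption, not confirmed by the card.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="antalvdb/bart-base-spelling-nl",
)

text = "Dit is een zin met een speling fout."  # illustrative misspelling
result = corrector(text, max_length=128)
print(result[0]["generated_text"])
```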

## Intended uses & limitations

This model is intended to serve as a component of the Valkuil.net context-sensitive spelling checker. A future version of the model will be trained on more data.

## Training and evaluation data

The model was trained on a Dutch dataset composed of 300,000 lines of text, 100,000 lines from each of three public Dutch sources downloaded from the Opus corpus (one way such lines could be turned into training pairs is sketched after this list):

- nl-europarlv7.100k.txt
- nl-opensubtitles2016.100k.txt
- nl-wikipedia.100k.txt
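
The card does not describe how (misspelled, correct) training pairs were derived from these clean lines. Following the general approach of Oliver Guhr's spelling-correction work, one plausible recipe is to corrupt clean text with random character-level edits. Everything in the sketch below (function name, probabilities, file choice) is an illustrative assumption, not the card's actual pipeline:

```python
# Illustrative sketch (an assumption, not the card's actual pipeline):
# derive (noisy, clean) training pairs by randomly corrupting clean lines.
import random

def corrupt(line: str, p: float = 0.05) -> str:
    """Randomly delete, duplicate, or swap adjacent characters with probability p."""
    chars = list(line)
    out = []
    i = 0
    while i < len(chars):
        r = random.random()
        if r < p / 3:
            # delete this character
            i += 1
            continue
        if r < 2 * p / 3:
            # duplicate this character
            out.append(chars[i])
            out.append(chars[i])
        elif r < p and i + 1 < len(chars):
            # swap this character with the next one
            out.append(chars[i + 1])
            out.append(chars[i])
            i += 1
        else:
            out.append(chars[i])
        i += 1
    return "".join(out)

# Build (noisy, clean) pairs from one of the source files listed above.
with open("nl-wikipedia.100k.txt", encoding="utf-8") as f:
    pairs = [(corrupt(l.rstrip("\n")), l.rstrip("\n")) for l in f]
```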

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the code sketch after this list):

- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
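
For concreteness, here is how these settings map onto `Seq2SeqTrainingArguments` in the `transformers` library. This is a sketch: `output_dir` is a placeholder, and the Trainer's default AdamW optimizer already matches the listed betas and epsilon:

```python
# Rough sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# `output_dir` is a placeholder, not taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-spelling-nl",
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2.0,
)
```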

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1617        | 0.11  | 1000  | 0.0986          | 0.9241 |
| 0.1326        | 0.21  | 2000  | 0.0676          | 0.9240 |
| 0.09          | 0.32  | 3000  | 0.0586          | 0.9241 |
| 0.0891        | 0.43  | 4000  | 0.0530          | 0.9240 |
| 0.0753        | 0.54  | 5000  | 0.0491          | 0.9239 |
| 0.069         | 0.64  | 6000  | 0.0459          | 0.9238 |
| 0.0615        | 0.75  | 7000  | 0.0435          | 0.9238 |
| 0.0494        | 0.86  | 8000  | 0.0409          | 0.9237 |
| 0.0671        | 0.97  | 9000  | 0.0388          | 0.9238 |
| 0.0425        | 1.07  | 10000 | 0.0367          | 0.9237 |
| 0.0394        | 1.18  | 11000 | 0.0356          | 0.9237 |
| 0.0399        | 1.29  | 12000 | 0.0344          | 0.9236 |
| 0.0375        | 1.4   | 13000 | 0.0333          | 0.9235 |
| 0.0409        | 1.5   | 14000 | 0.0315          | 0.9237 |
| 0.0291        | 1.61  | 15000 | 0.0304          | 0.9236 |
| 0.0268        | 1.72  | 16000 | 0.0293          | 0.9236 |
| 0.0309        | 1.83  | 17000 | 0.0284          | 0.9235 |
| 0.0362        | 1.93  | 18000 | 0.0276          | 0.9235 |

### Framework versions

- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2