nrshoudi committed
Commit 409d21e
1 Parent(s): 528a6f3

update model card README.md

Files changed (1):
  1. README.md +38 -38
README.md CHANGED
@@ -12,10 +12,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-xls-r-300m-Arabic-phoneme
 
- This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0131
- - Cer: 0.0698
+ - Loss: 0.8176
+ - Per: 0.9118
 
 ## Model description
 
@@ -35,11 +35,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0005
- - train_batch_size: 6
+ - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
- - total_train_batch_size: 24
+ - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 250
@@ -48,43 +48,43 @@ The following hyperparameters were used during training:
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Cer |
+ | Training Loss | Epoch | Step | Validation Loss | Per |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 3.7128 | 1.0 | 136 | 3.4914 | 0.9731 |
- | 2.99 | 2.0 | 272 | 3.2392 | 0.9731 |
- | 1.8777 | 3.0 | 408 | 1.0871 | 0.4466 |
- | 0.5067 | 4.0 | 544 | 0.2068 | 0.1338 |
- | 0.1711 | 5.0 | 680 | 0.0875 | 0.0913 |
- | 0.0905 | 6.0 | 816 | 0.0734 | 0.0843 |
- | 0.0712 | 7.0 | 952 | 0.0433 | 0.0770 |
- | 0.0535 | 8.0 | 1088 | 0.0314 | 0.0733 |
- | 0.0502 | 9.0 | 1224 | 0.0345 | 0.0752 |
- | 0.0501 | 10.0 | 1360 | 0.0265 | 0.0741 |
- | 0.0446 | 11.0 | 1496 | 0.0326 | 0.0741 |
- | 0.0348 | 12.0 | 1632 | 0.0340 | 0.0763 |
- | 0.046 | 13.0 | 1768 | 0.0250 | 0.0735 |
- | 0.0316 | 14.0 | 1904 | 0.0480 | 0.0860 |
- | 0.0198 | 15.0 | 2040 | 0.0267 | 0.0736 |
- | 0.0209 | 16.0 | 2176 | 0.0173 | 0.0713 |
- | 0.0131 | 17.0 | 2312 | 0.0204 | 0.0714 |
- | 0.0184 | 18.0 | 2448 | 0.0183 | 0.0707 |
- | 0.0136 | 19.0 | 2584 | 0.0245 | 0.0717 |
- | 0.0165 | 20.0 | 2720 | 0.0200 | 0.0737 |
- | 0.012 | 21.0 | 2856 | 0.0152 | 0.0703 |
- | 0.0124 | 22.0 | 2992 | 0.0149 | 0.0703 |
- | 0.0118 | 23.0 | 3128 | 0.0168 | 0.0711 |
- | 0.0087 | 24.0 | 3264 | 0.0139 | 0.0702 |
- | 0.0077 | 25.0 | 3400 | 0.0128 | 0.0696 |
- | 0.0103 | 26.0 | 3536 | 0.0121 | 0.0696 |
- | 0.0059 | 27.0 | 3672 | 0.0131 | 0.0700 |
- | 0.0058 | 28.0 | 3808 | 0.0129 | 0.0699 |
- | 0.0067 | 29.0 | 3944 | 0.0132 | 0.0700 |
- | 0.0055 | 30.0 | 4080 | 0.0131 | 0.0698 |
+ | 2.4826 | 1.0 | 102 | 2.2995 | 1.0 |
+ | 2.2578 | 2.0 | 204 | 2.3180 | 1.0 |
+ | 2.2646 | 2.99 | 306 | 2.2911 | 1.0 |
+ | 2.2626 | 4.0 | 409 | 2.2801 | 1.0 |
+ | 2.2365 | 5.0 | 511 | 2.2799 | 1.0 |
+ | 1.8441 | 6.0 | 613 | 1.8658 | 1.0 |
+ | 1.71 | 6.99 | 715 | 1.7148 | 1.0 |
+ | 1.707 | 8.0 | 818 | 1.7154 | 1.0 |
+ | 1.7272 | 9.0 | 920 | 1.7486 | 1.0 |
+ | 1.7125 | 10.0 | 1022 | 1.7092 | 1.0 |
+ | 1.6913 | 10.99 | 1124 | 1.7012 | 1.0 |
+ | 1.6803 | 12.0 | 1227 | 1.6967 | 1.0 |
+ | 1.6757 | 13.0 | 1329 | 1.6782 | 1.0 |
+ | 1.6585 | 14.0 | 1431 | 1.6595 | 1.0 |
+ | 1.6505 | 14.99 | 1533 | 1.6567 | 1.0 |
+ | 1.6389 | 16.0 | 1636 | 1.6378 | 1.0 |
+ | 1.626 | 17.0 | 1738 | 1.6214 | 1.0 |
+ | 1.606 | 18.0 | 1840 | 1.5867 | 1.0 |
+ | 1.5881 | 18.99 | 1942 | 1.5389 | 0.9920 |
+ | 1.5545 | 20.0 | 2045 | 1.5063 | 0.9909 |
+ | 1.5236 | 21.0 | 2147 | 1.4664 | 0.9856 |
+ | 1.483 | 22.0 | 2249 | 1.4135 | 0.9759 |
+ | 1.4182 | 22.99 | 2351 | 1.3585 | 0.9644 |
+ | 1.3516 | 24.0 | 2454 | 1.2776 | 0.9696 |
+ | 1.2891 | 25.0 | 2556 | 1.1894 | 0.9601 |
+ | 1.2138 | 26.0 | 2658 | 1.0975 | 0.9484 |
+ | 1.1276 | 26.99 | 2760 | 1.0188 | 0.9178 |
+ | 1.0627 | 28.0 | 2863 | 0.9328 | 0.9226 |
+ | 0.9885 | 29.0 | 2965 | 0.8636 | 0.9103 |
+ | 0.9552 | 29.93 | 3060 | 0.8327 | 0.9102 |
 
 
 ### Framework versions
 
- - Transformers 4.26.1
+ - Transformers 4.27.4
 - Pytorch 1.13.1+cu116
- - Datasets 2.10.1
+ - Datasets 2.11.0
 - Tokenizers 0.13.2
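
As a quick cross-check of the updated hyperparameters, the sketch below shows how they could be expressed with `transformers.TrainingArguments`. This is not the author's training script: the `output_dir` is hypothetical, the epoch count is inferred from the results table (which ends near epoch 30), and the optimizer settings are left at the library defaults, which already match the listed Adam betas and epsilon.

```python
# Hypothetical reconstruction of the training configuration from the updated card.
# Only values listed in the README are filled in; model and dataset wiring is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-Arabic-phoneme",  # assumed name
    learning_rate=5e-4,                 # learning_rate: 0.0005
    per_device_train_batch_size=8,      # train_batch_size: 8
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=4,      # 8 * 4 = total_train_batch_size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,                   # lr_scheduler_warmup_steps: 250
    num_train_epochs=30,                # inferred from the results table
)
```

Note that raising train_batch_size from 6 to 8 is what moves the effective (total) train batch size from 24 to 32, since gradient_accumulation_steps stays at 4.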