---
license: apache-2.0
base_model: h2oai/h2o-danube2-1.8b-base
datasets:
- migtissera/Tess-v1.5
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
---
# h2o-danube2 with ChatML template

This model was fine-tuned with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") on [migtissera/Tess-v1.5](https://huggingface.co./datasets/migtissera/Tess-v1.5) using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

## Quants

Thanks to [mradermacher](https://huggingface.co./mradermacher) for this!

- [mradermacher/danube2-1.8b-Tess-v1.5-GGUF](https://huggingface.co./mradermacher/danube2-1.8b-Tess-v1.5-GGUF)

## Template

```jinja
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
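For inference, this ChatML format can be rendered through the tokenizer's chat template. A minimal sketch, assuming the tokenizer ships with the template above; the repository id is a placeholder, substitute the actual one:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for illustration only.
model_id = "your-namespace/danube2-1.8b-Tess-v1.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize BAdam in one sentence."},
]

# apply_chat_template renders the <|im_start|>/<|im_end|> format shown above.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```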

## BAdam config

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 6
seed: 720

### dataset
dataset: tess15
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: tess15-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
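The `badam_*` keys control BAdam's block-coordinate schedule: with `ascending` switch mode, full Adam updates start at decoder block 6 and move to the next block every 50 optimizer steps, so only one block is trainable at a time. A rough sketch of that schedule (hypothetical helper for illustration, not LLaMA-Factory or badam code):

```python
def active_block(step: int, num_blocks: int,
                 start_block: int = 6, switch_interval: int = 50) -> int:
    """Index of the decoder block receiving updates at a given optimizer
    step, mirroring badam_switch_mode: ascending with the config above."""
    return (start_block + step // switch_interval) % num_blocks

# Steps 0-49 train block 6, steps 50-99 train block 7, and so on,
# wrapping around (danube2-1.8b is assumed to have 24 decoder blocks).
assert active_block(0, 24) == 6
assert active_block(50, 24) == 7
```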

### BAdam training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8017        | 0.0643 | 1000  | 0.6820          |
| 0.6167        | 0.1287 | 2000  | 0.6610          |
| 0.6161        | 0.1930 | 3000  | 0.6496          |
| 0.6322        | 0.2574 | 4000  | 0.6423          |
| 0.5127        | 0.3217 | 5000  | 0.6366          |
| 0.61          | 0.3860 | 6000  | 0.6312          |
| 0.6758        | 0.4504 | 7000  | 0.6266          |
| 0.5901        | 0.5147 | 8000  | 0.6215          |
| 0.5163        | 0.5791 | 9000  | 0.6197          |
| 0.6043        | 0.6434 | 10000 | 0.6175          |
| 0.5056        | 0.7077 | 11000 | 0.6153          |
| 0.5772        | 0.7721 | 12000 | 0.6126          |
| 0.6692        | 0.8364 | 13000 | 0.6107          |
| 0.5262        | 0.9008 | 14000 | 0.6066          |
| 0.6386        | 0.9651 | 15000 | 0.6056          |