REILX committed
Commit fec95ed
1 parent: 9ed7e6f

Update README.md

Files changed (1): README.md (+42 -3)
---
license: llama3
datasets:
- silk-road/alpaca-data-gpt4-chinese
- TigerResearch/sft_zh
- LooksJuicy/ruozhiba
- leo009/alpaca-cleaned-zh-cn
- REILX/extracted_tagengo_gpt4
language:
- en
- zh
---
### Model:
- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

### Datasets:
- https://huggingface.co/datasets/TigerResearch/sft_zh
- https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese
- https://huggingface.co/datasets/REILX/extracted_tagengo_gpt4
- https://huggingface.co/datasets/LooksJuicy/ruozhiba
- https://huggingface.co/datasets/leo009/alpaca-cleaned-zh-cn

(The datasets above were cleaned with langid to remove non-Chinese entries; a sketch of this step follows.)
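A minimal sketch of that langid cleaning pass, assuming alpaca-style JSON records; the file names and the `instruction` field are illustrative, not the exact script used:

```python
# Minimal sketch of the langid cleaning pass described above.
# File names and the "instruction" field are assumptions; the actual
# datasets may use different layouts.
import json

import langid


def keep_chinese(records, text_key="instruction"):
    """Yield only records whose text langid classifies as Chinese."""
    for rec in records:
        lang, _score = langid.classify(rec.get(text_key, ""))
        if lang == "zh":
            yield rec


with open("alpaca_data_zh.json", encoding="utf-8") as f:
    records = json.load(f)

cleaned = list(keep_chinese(records))
with open("alpaca_data_zh.cleaned.json", "w", encoding="utf-8") as f:
    json.dump(cleaned, f, ensure_ascii=False, indent=2)
```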
### Training tool
https://github.com/hiyouga/LLaMA-Factory
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `transformers.TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
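LLaMA-Factory is built on the Hugging Face `Trainer`, so the list above maps onto `transformers.TrainingArguments`. A hedged sketch of that mapping, not the actual launch command (the run was started through LLaMA-Factory, and `output_dir` here is a hypothetical name):

```python
# Hedged sketch: the hyperparameters above expressed as
# transformers.TrainingArguments; output_dir is a hypothetical name.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-8b-zh-sft",   # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # train_batch_size (per device)
    per_device_eval_batch_size=8,    # eval_batch_size (per device)
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio
    num_train_epochs=3.0,
)

# Consistency check: with 8 GPUs, the effective batch sizes are
# 4 * 8 * 4 = 128 for training and 8 * 8 = 64 for evaluation,
# matching total_train_batch_size and total_eval_batch_size above.
```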