---
license: other
inference: false
---

# OpenAssistant LLaMA 30B SFT 7 GPTQ

This is a repo of GPTQ format quantised models for [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).

It is the result of merging the XORs from the above repo with the original Llama 30B weights, and then quantising to 4bit for GPU inference using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
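
For reference, the XOR merge follows the process published by OpenAssistant alongside the XOR weights. A rough sketch of that step (the `xor_codec.py` script comes from their repo, and all three paths are placeholders; see their model card for the authoritative commands):

```
# Sketch only: apply OpenAssistant's XOR deltas to the original Llama 30B HF weights.
# Output dir, XOR dir and base-model dir are placeholder paths.
python xor_codec.py oasst-sft-7-llama-30b/ oasst-sft-7-llama-30b-xor/ llama30b_hf/
```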

This is epoch 7 of OpenAssistant's training of their Llama 30B model.

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML).
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF).

## PROMPT TEMPLATE

This model requires the following prompt template:

```
<|prompter|> prompt goes here
<|assistant|>:
```
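
For example, a full prompt in this template can be assembled from the shell like this (a minimal sketch; the question is only a placeholder):

```
# Build a prompt in the required template; bash's $'...' quoting turns \n into a real newline.
PROMPT=$'<|prompter|> What is the capital of France?\n<|assistant|>:'
printf '%s\n' "$PROMPT"
```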

## How to easily download and use this model in text-generation-webui

Load text-generation-webui as you normally do.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
6. Now click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**: choose this model: `OpenAssistant-SFT-7-Llama-30B-GPTQ`.
8. Click **Reload the Model** in the top right.
9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
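
If you prefer the command line, steps 2-4 can also be done with git (a sketch; assumes `git-lfs` is installed):

```
# Fetch the whole repo, including the model weights, into the webui's models folder.
cd text-generation-webui/models
git lfs install
git clone https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ
```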

## Provided files

I have uploaded two versions of the GPTQ.

**Compatible file - OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors**

In the `main` branch - the default one - you will find `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa, so it has maximum compatibility.

It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

* `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py OpenAssistant-SFT-7-Llama-30B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.no-act-order.safetensors
    ```
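
If you only want this single file, it can also be fetched directly from the `main` branch (a sketch, using Hugging Face's standard `resolve` URL pattern with the filename above):

```
# Download only the compatible safetensors file from the main branch.
wget https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ/resolve/main/OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors
```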

**Latest file - OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.latest.act-order.safetensors**

Created for more recent versions of GPTQ-for-LLaMa, and uses the `--act-order` flag for maximum theoretical performance.

To access this file, please switch to the `latest` branch of this repo and download from there.

* `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.latest.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. **act-order**.
  * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py OpenAssistant-SFT-7-Llama-30B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.act-order.safetensors
    ```
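
To grab the `latest` branch in one step, the clone can name the branch directly (a sketch; assumes `git-lfs` is installed):

```
# Clone only the act-order branch of this repo.
git clone -b latest https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ
```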

## Manual instructions for `text-generation-webui`

File `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).

The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires the latest GPTQ-for-LLaMa to be used inside the UI.

If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:

```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:

```
cd text-generation-webui
python server.py --model OpenAssistant-SFT-7-Llama-30B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
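
As one example of that setup, the Triton branch's Python dependencies can usually be installed from its requirements file (a sketch; treat the GPTQ-for-LLaMa README as authoritative):

```
# Install GPTQ-for-LLaMa's Python dependencies after cloning it into repositories/.
cd text-generation-webui/repositories/GPTQ-for-LLaMa
pip install -r requirements.txt
```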
105
+
106
+ If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.

# Original model card

```
llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
```

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327