---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
license: apache-2.0
datasets:
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
model_name: bagel-7b-v0.5
base_model: alpindale/Mistral-7B-v0.2-hf
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: jondurbin
inference: false
prompt_template: '{bos}<|im_start|>{role}

  {text}

  <|im_end|>{eos} '
---
# jondurbin/bagel-7b-v0.5 AWQ

- Model creator: [jondurbin](https://huggingface.co/jondurbin)
- Original model: [bagel-7b-v0.5](https://huggingface.co/jondurbin/bagel-7b-v0.5)

![bagel](bagel.png)

## Model Summary

This is a fine-tune of Mistral-7B-v0.2 using the bagel v0.5 dataset.

See [bagel](https://github.com/jondurbin/bagel) for additional details on the datasets.

The DPO version will be available soon [here](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5).

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/bagel-7b-v0.5-AWQ"
system_message = "You are Bagel, an incarnation of a powerful AI with everything."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

# Convert the formatted prompt to input token IDs on the GPU
tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming tokens as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
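
The `TextStreamer` prints tokens to the console as they are generated; if you would rather get the completion back as a string, drop the `streamer` argument and decode `generation_output[0]` with `tokenizer.decode(..., skip_special_tokens=True)`.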

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
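
As an illustration, here is a minimal sketch of serving this model through vLLM's offline API; the sampling values and the prompt are placeholders, not recommendations:

```python
# Minimal vLLM sketch (assumes vllm >= 0.2.2 with AWQ support and a CUDA GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/bagel-7b-v0.5-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Prompts must already be in ChatML form (see the template below).
chatml_prompt = (
    "<|im_start|>user\n"
    "What is a bagel?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([chatml_prompt], params)
print(outputs[0].outputs[0].text)
```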

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
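
If the repository's tokenizer config ships a ChatML chat template (an assumption worth verifying before relying on it), the same prompt can be built programmatically with Transformers' `apply_chat_template`:

```python
# Sketch: build the ChatML prompt via the tokenizer's chat template.
# Assumes the tokenizer config defines a ChatML chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/bagel-7b-v0.5-AWQ")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a bagel?"},
]
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)  # should match the <|im_start|>/<|im_end|> layout above
```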