Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

text-to-cypher - GGUF
- Model creator: https://huggingface.co/diegomiranda/
- Original model: https://huggingface.co/diegomiranda/text-to-cypher/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [text-to-cypher.Q2_K.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q2_K.gguf) | Q2_K | 0.04GB |
| [text-to-cypher.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.IQ3_XS.gguf) | IQ3_XS | 0.04GB |
| [text-to-cypher.IQ3_S.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.IQ3_S.gguf) | IQ3_S | 0.04GB |
| [text-to-cypher.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q3_K_S.gguf) | Q3_K_S | 0.04GB |
| [text-to-cypher.IQ3_M.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.IQ3_M.gguf) | IQ3_M | 0.04GB |
| [text-to-cypher.Q3_K.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q3_K.gguf) | Q3_K | 0.04GB |
| [text-to-cypher.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [text-to-cypher.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [text-to-cypher.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [text-to-cypher.Q4_0.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q4_0.gguf) | Q4_0 | 0.04GB |
| [text-to-cypher.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [text-to-cypher.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [text-to-cypher.Q4_K.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q4_K.gguf) | Q4_K | 0.05GB |
| [text-to-cypher.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [text-to-cypher.Q4_1.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q4_1.gguf) | Q4_1 | 0.05GB |
| [text-to-cypher.Q5_0.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q5_0.gguf) | Q5_0 | 0.05GB |
| [text-to-cypher.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [text-to-cypher.Q5_K.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q5_K.gguf) | Q5_K | 0.05GB |
| [text-to-cypher.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [text-to-cypher.Q5_1.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q5_1.gguf) | Q5_1 | 0.05GB |
| [text-to-cypher.Q6_K.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q6_K.gguf) | Q6_K | 0.06GB |
| [text-to-cypher.Q8_0.gguf](https://huggingface.co/RichardErkhov/diegomiranda_-_text-to-cypher-gguf/blob/main/text-to-cypher.Q8_0.gguf) | Q8_0 | 0.07GB |

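For quick local testing of these quants, here is a minimal sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed; the chosen quant file is just one option from the table:

```python
# Sketch: download a single GGUF quant and run it with llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off; any file from the table works.
model_path = hf_hub_download(
    repo_id="RichardErkhov/diegomiranda_-_text-to-cypher-gguf",
    filename="text-to-cypher.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Example prompt taken from the original model card below.
prompt = "Create a Cypher statement to answer the following question:Retorne os processos de Direito Tributário que se baseiam em lei 939 de 1992?<|endoftext|>"
out = llm(prompt, max_tokens=200)
print(out["choices"][0]["text"])
```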

Original model description:
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-70m-deduped-v0](https://huggingface.co/EleutherAI/pythia-70m-deduped-v0)

## Usage on CPU

```bash
pip install transformers==4.30.2
pip install accelerate==0.20.3
pip install torch==2.0.1
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

def generate_response(prompt, model_name):
    tokenizer = AutoTokenizer.from_pretrained(
        model_name,
        use_fast=True,
        trust_remote_code=True,
    )

    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float32,
        device_map={"": "cpu"},
        trust_remote_code=True,
    )
    model.cpu().eval()

    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cpu")

    tokens = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        min_new_tokens=2,
        max_new_tokens=500,
        do_sample=False,
        num_beams=2,
        temperature=float(0.0),
        repetition_penalty=float(1.0),
        renormalize_logits=True,
    )[0]

    # Keep only the newly generated tokens, dropping the prompt.
    tokens = tokens[inputs["input_ids"].shape[1]:]
    answer = tokenizer.decode(tokens, skip_special_tokens=True)

    return answer
```

Once you've defined the function, you can set up the prompt and the model name:

```python
model_name = "diegomiranda/text-to-cypher"
# The question in the prompt is Portuguese: "Return the Tax Law cases that are based on law 939 of 1992?"
prompt = "Create a Cypher statement to answer the following question:Retorne os processos de Direito Tributário que se baseiam em lei 939 de 1992?<|endoftext|>"
response = generate_response(prompt, model_name)
print(response)
```


## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```bash
pip install transformers==4.31.0
```

Also make sure you are providing your Hugging Face token to the pipeline if the model is in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`, as sketched below

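A minimal sketch of the second option (the token string is a placeholder you supply):

```python
# Sketch: pass the Hugging Face access token directly to the pipeline.
from transformers import pipeline

generate_text = pipeline(
    model="diegomiranda/text-to-cypher",
    token="<ACCESS_TOKEN>",  # placeholder: your Hugging Face access token
    trust_remote_code=True,
)
```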
```python
from transformers import pipeline

generate_text = pipeline(
    model="diegomiranda/text-to-cypher",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
    token=True,
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=500,
    do_sample=False,
    num_beams=2,
    temperature=float(0.0),
    repetition_penalty=float(1.0),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
Why is drinking water so healthy?<|endoftext|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.

```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "diegomiranda/text-to-cypher",
    use_fast=True,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "diegomiranda/text-to-cypher",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=500,
    do_sample=False,
    num_beams=2,
    temperature=float(0.0),
    repetition_penalty=float(1.0),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
```


You may also construct the pipeline from the loaded model and tokenizer yourself, taking the preprocessing steps into account:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "diegomiranda/text-to-cypher"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?<|endoftext|>"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=True,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generation configuration can be modified to your needs
tokens = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    min_new_tokens=2,
    max_new_tokens=500,
    do_sample=False,
    num_beams=2,
    temperature=float(0.0),
    repetition_penalty=float(1.0),
    renormalize_logits=True,
)[0]

# Keep only the newly generated tokens, dropping the prompt.
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```

## Quantization and sharding

You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.
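A minimal sketch of both options together (assuming `bitsandbytes` and `accelerate` are installed alongside `transformers`):

```python
# Sketch: load the model in 8-bit and shard it across visible GPUs.
# Assumes: pip install bitsandbytes accelerate
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "diegomiranda/text-to-cypher",
    load_in_8bit=True,    # or load_in_4bit=True for 4-bit quantization
    device_map="auto",    # shard layers across all visible GPUs
    trust_remote_code=True,
)
```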

## Model Architecture

```
GPTNeoXForCausalLM(
  (gpt_neox): GPTNeoXModel(
    (embed_in): Embedding(50304, 512)
    (emb_dropout): Dropout(p=0.0, inplace=False)
    (layers): ModuleList(
      (0-5): 6 x GPTNeoXLayer(
        (input_layernorm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (post_attention_dropout): Dropout(p=0.0, inplace=False)
        (post_mlp_dropout): Dropout(p=0.0, inplace=False)
        (attention): GPTNeoXAttention(
          (rotary_emb): GPTNeoXRotaryEmbedding()
          (query_key_value): Linear(in_features=512, out_features=1536, bias=True)
          (dense): Linear(in_features=512, out_features=512, bias=True)
          (attention_dropout): Dropout(p=0.0, inplace=False)
        )
        (mlp): GPTNeoXMLP(
          (dense_h_to_4h): Linear(in_features=512, out_features=2048, bias=True)
          (dense_4h_to_h): Linear(in_features=2048, out_features=512, bias=True)
          (act): GELUActivation()
        )
      )
    )
    (final_layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
  )
  (embed_out): Linear(in_features=512, out_features=50304, bias=False)
)
```
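The module tree above is simply the printed repr of the loaded model, and can be reproduced with:

```python
# Sketch: print the module tree shown above.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "diegomiranda/text-to-cypher",
    trust_remote_code=True,
)
print(model)  # prints the GPTNeoXForCausalLM structure
```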

## Model Configuration

This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.


## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.