tags:
- SFT
size_categories:
- 100K<n<1M
---
# Data for LLM ASCII Art
This repository contains open-source SFT data for fine-tuning LLMs on ASCII art generation.

## Dataset Links

| Link | Language | Size |
|:----:|:--------:|:----:|
| [ascii_art_generation_140k](https://huggingface.co/datasets/mrzjy/ascii_art_generation_140k) | English | 138,941 |
| [ascii_art_generation_140k_bilingual](https://huggingface.co/datasets/mrzjy/ascii_art_generation_140k_bilingual) | Chinese & English | 138,941 |

## Data Preparation

### Training data description

The training data consists of 138,941 ASCII-art instruction-response samples for supervised fine-tuning (SFT) of LLMs.

The source images are taken either from [LAION-COCO-NLLB](https://huggingface.co/datasets/visheratin/laion-coco-nllb) (the majority) or from [Imagenet-Sketch](https://github.com/HaohanWang/ImageNet-Sketch).

**Data Processing**

- **1) ASCII Art Conversion from Image:** All images are converted to ASCII art with [ascii-image-converter](https://github.com/TheZoraiz/ascii-image-converter) using the following command:

```shell
ascii-image-converter path/to/image -m " .+#@/()" -H 20 --negative
```

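The conversion step could also be driven from Python via `subprocess` (a hypothetical helper, not part of this repo; it assumes the `ascii-image-converter` binary is on your `PATH`):

```python
import subprocess

# Custom character set, matching the -m argument in the command above.
CHARSET = " .+#@/()"

def build_command(image_path, height=20):
    # Assemble the CLI invocation shown above.
    return [
        "ascii-image-converter", image_path,
        "-m", CHARSET,      # custom character set
        "-H", str(height),  # output height in rows
        "--negative",       # invert the brightness mapping
    ]

def image_to_ascii(image_path, height=20):
    # Run the converter and return its stdout as text.
    result = subprocess.run(
        build_command(image_path, height),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```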
- **2) Blank Space Cropping:** Crop horizontal and vertical blank space to remove redundant space tokens. See the example below:

```example
# Illustration of the raw ASCII art output
# by ascii-image-converter

# ============== Top corner =========================
\n
\n
 . .#. \n
 ++ .. +++++........ \n
 +#+. .##+++++++++.............. \n
 .++##########++++......++...++++.. \n
 +##.+##++#++++...+++.++..... \n
 +. ++ .#+. .+.. \n
\n
\n
\n
# ============== Bottom corner ======================

# After Blank Space Cropping
# ============== Top corner =========================
. .#.\n
++ .. +++++........\n
+#+. .##+++++++++..............\n
.++##########++++......++...++++..\n
+##.+##++#++++...+++.++.....\n
+. ++ .#+. .+..\n
# ============== Bottom corner ======================
```
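The cropping step is not released as code; a minimal Python sketch of the idea (function name and details are our assumption) could look like:

```python
def crop_blank_space(ascii_text):
    # Hypothetical re-implementation of the cropping step:
    # drop blank lines at the top and bottom, then remove the
    # common left margin and any trailing spaces on each line.
    lines = ascii_text.split("\n")
    while lines and not lines[0].strip():
        lines.pop(0)
    while lines and not lines[-1].strip():
        lines.pop()
    if not lines:
        return ""
    margin = min(len(l) - len(l.lstrip(" ")) for l in lines if l.strip())
    return "\n".join(l[margin:].rstrip() for l in lines)
```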

- **3) Instruction-Response Sample Format:** We use a simple prompt template to format each training sample:

```prompt
# template for the user instruction
Draw {caption} in ASCII art{instruction}

# template for the assistant response
\```
{ascii_txt}
\```
```

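Applying the template programmatically might look like this (a sketch; `format_sample` and its signature are our assumption):

```python
def format_sample(caption, ascii_txt, instruction=""):
    # Fill the user/assistant templates above; `instruction` carries
    # optional suffixes such as " in small size".
    user = f"Draw {caption} in ASCII art{instruction}"
    assistant = f"```\n{ascii_txt}\n```"
    return [
        {"role": "user", "content": user},
        {"role": "assistant", "content": assistant},
    ]
```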
85
+ A real training sample:
86
+
87
+ ```json
88
+ {
89
+ "id": "6d8c10c8-df57-461d-8a88-2fe74f578f6f",
90
+ "conversations": [
91
+ {
92
+ "role": "user",
93
+ "content": "Draw The legs and heels are wearing orange shoes. in ASCII art in small size"
94
+ },
95
+ {
96
+ "role": "assistant",
97
+ "content": "```\n ......... .. ...\n ........ ... ...\n ........ ... ...\n ........ ... ..\n ....... ... ...\n ...... ... ...\n ...... ... ...\n ...... .......\n ....... ......\n ....+.+#. .......\n .......###..... ...+.\n ......##@@........+##.\n .....#@###... ....####\n .....@@+ .+.. ...+###.\n .+. .#@. .... . .##+++\n.++######@#. .... .##. .+\n++##++##++...++++...+##. .+\n +##@#########. .\n ............\n```"
98
+ }
99
+ ],
100
+ "meta": {
101
+ "width": 29,
102
+ "height": 19
103
+ }
104
+ }
105
+ ```
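The `meta` fields can be recomputed from the assistant response (a sketch; assumes the art is wrapped in a ``` fence as in the sample above):

```python
def art_dimensions(assistant_content):
    # Strip the surrounding ``` fence, then measure the ASCII art:
    # width = length of the longest line, height = number of lines.
    body = assistant_content.strip()
    if body.startswith("```"):
        body = body[3:]
    if body.endswith("```"):
        body = body[:-3]
    lines = body.strip("\n").split("\n")
    return {"width": max(len(l) for l in lines), "height": len(lines)}
```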

```
# Visualization of the above sample
Instruction: Draw The legs and heels are wearing orange shoes. in ASCII art in small size
Response:
......... .. ...
........ ... ...
........ ... ...
........ ... ..
....... ... ...
...... ... ...
...... ... ...
...... .......
....... ......
....+.+#. .......
.......###..... ...+.
......##@@........+##.
.....#@###... ....####
.....@@+ .+.. ...+###.
.+. .#@. .... . .##+++
.++######@#. .... .##. .+
++##++##++...++++...+##. .+
+##@#########. .
............
```

**Data Filtering**

Not all converted ASCII arts are of high quality. We apply several filters to remove low-quality samples (roughly 85% of samples are filtered out, yielding the current training dataset).

- **1) Density:** Defined as the ratio of non-space characters among all characters (equivalently, 1 minus the space ratio). We only keep samples with moderate density (0.3 < density < 0.6).

```python
"""bad case to be filtered
@@@@@@@@///(/@#+......+#@((///////
///////////(@. .++++. ..#((///////
(((////////(# .@@.## #((///////
(((////////(@+. .... ..#((///////
///////////(/@@#+++.+##@/((///////
//////((/////(((((((((((((///////(
(//((//((//(((((//((((//(/((((((((
(((((/@((//((((((/(((((((((@((((((
(((((((((((((((#@(((/((((((@/+/(((
(((((///(((((((##(((((((((((/@/(((
"""

def calculate_density(ascii_text):
    # Pad every line to the maximum width so that trailing blanks
    # also count as spaces, then return the non-space ratio.
    lines = ascii_text.split("\n")
    max_characters_per_line = max(len(l) for l in lines)
    space_cnt = 0
    for l in lines:
        space_cnt += sum(1 for c in l if c == " ")
        space_cnt += max_characters_per_line - len(l)
    return 1 - space_cnt / (max_characters_per_line * len(lines))
```

- **2) Diversity:** Defined as 1 minus the ratio of dot characters among all non-space characters. We filter out samples with low diversity.

```python
"""bad case to be filtered
.+. .. ... .. .++ .+ +.
.+.+... .. .. . . . . .
.... .. ... .. .+ .. . .
..... .+ ... .. + .. . .
.... .. ... . + .. . .
.... .. .+. . .. .. . .
..... .. ... .. .. .. . .
... .. ... .. .. .. . .
...... .++ .+. .. .+ +. . .
"""

def calculate_diversity(ascii_text):
    # 1 minus the dot ratio among non-space characters
    # (newlines are excluded from the character counts).
    dot_cnt, non_space_cnt = 0, 0
    for c in ascii_text:
        if c == ".":
            dot_cnt += 1
        if c not in (" ", "\n"):
            non_space_cnt += 1
    return 1 - dot_cnt / non_space_cnt
```

- **3) No Isolation:** We filter out samples containing isolated lines, i.e. a non-blank line whose previous three lines are all blank.

```python
"""bad case to be filtered
....
.... .. .
..+++++++........ .
..+++++++........ .
...++++++.... . .
...++++.+... .
. ........... .
. . ....... .
...++........ .
................ .
.++++ .. ..




+############.
#/@@@@@@@@@@@+
"""

def check_isolation(ascii_text):
    # Returns True if some non-blank line is preceded by
    # three consecutive all-blank lines.
    lines = ascii_text.split("\n")
    for i, l in enumerate(lines):
        has_character = any(c != " " for c in l)
        if has_character and i > 3:
            # check whether the previous 3 lines are all blank
            isolation_from_previous = True
            for prev_i in range(max(0, i - 3), i):
                if not all(c == " " for c in lines[prev_i]):
                    isolation_from_previous = False
                    break
            if isolation_from_previous:
                return True
    return False
```
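Putting the three filters together, a sample-level check might look like the following sketch (the diversity threshold of 0.1 is a hypothetical value; only the density bounds 0.3-0.6 are stated above):

```python
def keep_sample(ascii_text):
    lines = ascii_text.split("\n")
    width = max(len(l) for l in lines)
    total = width * len(lines)
    # density filter: ratio of non-space characters (lines padded to width)
    non_space = sum(1 for c in ascii_text if c not in (" ", "\n"))
    density = non_space / total
    if not 0.3 < density < 0.6:
        return False
    # diversity filter: 1 minus the dot ratio among non-space characters
    diversity = 1 - ascii_text.count(".") / non_space
    if diversity < 0.1:  # hypothetical threshold, not stated in the text
        return False
    # isolation filter: a non-blank line preceded by 3 all-blank lines
    for i, l in enumerate(lines):
        if l.strip() and i > 3 and all(not lines[j].strip() for j in range(i - 3, i)):
            return False
    return True
```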

**Bilingual Version**

In addition to the English dataset, we provide a Chinese-English bilingual version, in which about 50% of the image captions are replaced with their Chinese translations (thanks to the translations in laion-coco-nllb):

```{'zh': 69777, 'en': 69164}```

Note that the total number of training samples remains 138,941 in both versions.

## Limitations

- **Color:** The current implementation only supports black-and-white ASCII art generation. Although color descriptions inevitably appear in some training captions, we have no choice but to ignore them for now. Adding an additional prediction head for RGB colors could be worth trying; you can find colored ASCII art examples in [ascii-image-converter](https://github.com/TheZoraiz/ascii-image-converter).