Modalities: Image, Text
Formats: webdataset
Libraries: Datasets, WebDataset
License: cc-by-nc-sa-4.0

Somayeh-h committed
Commit 0ade8fd · 1 Parent(s): 4521915

Added the dataset imageNames, the .csv ground truth files, code to check the dataset, and the readme file.
.gitattributes CHANGED
@@ -53,3 +53,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
Nordland_match.ipynb ADDED
@@ -0,0 +1,430 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "\n",
+ "# fall, winter & spring are aligned to summer, so read summer.csv\n",
+ "df = pd.read_csv('summer.csv', sep=',', header=0)\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tids = df.values[:, 0] / 100  # format: hhmmss\n",
+ "lats = df.values[:, 1]\n",
+ "lons = df.values[:, 2]\n",
+ "speeds = df.values[:, 3]\n",
+ "courses = df.values[:, 4]\n",
+ "alts = df.values[:, 5]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# go from their time-based representation to number of seconds\n",
+ "def val_to_sec(val):\n",
+ "    if not isinstance(val, np.ndarray):\n",
+ "        val = np.array([val])\n",
+ "    # np.int was removed from NumPy; the builtin int works as a dtype\n",
+ "    hours = (val / 10000).astype(int)\n",
+ "    minutes = ((val % 10000) / 100).astype(int)\n",
+ "    secs = (val % 100).astype(int)\n",
+ "\n",
+ "    absolute = hours * 3600 + minutes * 60 + secs\n",
+ "    if len(absolute) == 1:\n",
+ "        return int(absolute[0])\n",
+ "    else:\n",
+ "        return absolute"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tids_abs_seconds = val_to_sec(tids)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "keep_indices = tids_abs_seconds <= val_to_sec(152840)  # all data past 15h 28m 40s is not contained in the video, so remove it\n",
+ "# check that we have increasing timestamps\n",
+ "going_back_in_time = np.diff(tids_abs_seconds) <= 0\n",
+ "assert not np.any(going_back_in_time)\n",
+ "\n",
+ "speeds = speeds[keep_indices]\n",
+ "tids_abs_seconds = tids_abs_seconds[keep_indices]\n",
+ "start_val = val_to_sec(53806)  # this is when the train starts moving"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tids_abs_seconds_off = tids_abs_seconds - start_val"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tids_abs_seconds_off"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.argwhere(np.diff(tids_abs_seconds_off) > 25).flatten()  # just for debugging"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.diff(tids_abs_seconds_off[tids_abs_seconds_off > 0]).sum()  # train moves for ~35439 frames so we're not far off"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "desired_times = np.arange(-168, 35768 - 168)  # train starts moving at frame 168 so make everything relative to that"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "match_indices = []\n",
+ "for desired_time in desired_times:\n",
+ "    diffs = np.abs(tids_abs_seconds_off - desired_time)\n",
+ "    best_idx = diffs.argmin()\n",
+ "    match_indices.append(best_idx)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "new_img_ids, new_speeds, new_ref_times, new_lats, new_lons, new_courses, new_alts = [], [], [], [], [], [], []\n",
+ "for row_idx in range(35768):\n",
+ "    new_img_ids.append(row_idx + 1)\n",
+ "    new_match_idx = match_indices[row_idx]\n",
+ "    new_speeds.append(speeds[new_match_idx])\n",
+ "    new_ref_times.append(tids[new_match_idx])\n",
+ "    new_lats.append(lats[new_match_idx] / 100000)\n",
+ "    new_lons.append(lons[new_match_idx] / 100000)\n",
+ "    new_courses.append(courses[new_match_idx])\n",
+ "    new_alts.append(alts[new_match_idx])\n",
+ "new_img_ids = np.array(new_img_ids)\n",
+ "new_speeds = np.array(new_speeds)\n",
+ "new_ref_times = np.array(new_ref_times)\n",
+ "new_lats = np.array(new_lats)\n",
+ "new_lons = np.array(new_lons)\n",
+ "new_courses = np.array(new_courses)\n",
+ "new_alts = np.array(new_alts)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.savez('nordland_aligned.npz',\n",
+ "         img_id=new_img_ids,\n",
+ "         speed=new_speeds,\n",
+ "         ref_time=new_ref_times,\n",
+ "         lat=new_lats,\n",
+ "         lon=new_lons,\n",
+ "         course=new_courses,\n",
+ "         alt=new_alts)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Sanity check from manually found matches"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "rows_frames = np.array([  # manually found matches (when does train start/stop moving) -> frame number in video\n",
+ "    168, 1290, 1792, 2211, 2295, 2501, 2655, 3405, 3668, 5072, 5460, 7080, 7277, 7772, 7870, 10050, 10200, 11670, 11880, 13360, 14835, 19740,\n",
+ "    20040, 24120, 24390, 26410, 26535, 28975, 29090, 31090, 31185, 32400, 33040, 35130, 35177, 35608,\n",
+ "])\n",
+ "points_gps = [  # manually found matches -> time stamp in GPS data\n",
+ "    5380600, 5564200, 6050800, 6120100, 6133500, 6165300, 6193200, 6315700, 6363000, 6593500, 7061900, 7331200, 7363400, 7444600, 7462800,\n",
+ "    8225000, 8250600, 8494500, 8531900, 9175300, 9421800, 11040900, 11092800, 12171300, 12215500, 12552800, 12573600, 13381000,\n",
+ "    13400700, 14132000, 14150200, 14350900, 14455900, 15204500, 15213600, 15284000,\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "rows_gps = []\n",
+ "for point_gps in points_gps:\n",
+ "    diffs = np.abs(tids - point_gps / 100)\n",
+ "    best_idx = diffs.argmin()\n",
+ "    rows_gps.append(best_idx)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.array(rows_gps)  # as we can see we're close enough"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.array(match_indices)[np.array(rows_frames)]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "abs_diff = np.abs(np.array(rows_gps) - np.array(match_indices)[np.array(rows_frames)])\n",
+ "np.mean(abs_diff), np.max(abs_diff)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Build dbStruct matlab file"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext autoreload\n",
+ "%autoreload 2\n",
+ "\n",
+ "import sys\n",
+ "sys.path.append('../pytorch-NetVlad-Nanne')\n",
+ "\n",
+ "from datasets import parse_db_struct, save_db_struct, dbStruct"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tunnels = [(1870, 2029), (2313, 2333), (2341, 2355), (4093, 4097), (6501, 6506), (6756, 6773), (8479, 8484), (8489, 8494), (9967, 9979), (10239, 10268), (10408, 10416), (10944, 10947),\n",
+ "    (10985, 10991), (10997, 11003), (11008, 11019), (11022, 11028), (11030, 11032),\n",
+ "    (11037, 11048), (11057, 11065), (11101, 11107), (11129, 11146), (11225, 11228), (11280, 11286), (11915, 12036), (12057, 12062), (12074, 12082), (12165, 12168), (12204, 12208), (12319, 12365),\n",
+ "    (12409, 12417), (12472, 12481), (13620, 13628), (14320, 14348), (14390, 14400), (16203, 16206), (16472, 16484), (16690, 16695), (16933, 16936), (17054, 17068), (17177, 17183), (17734, 17756),\n",
+ "    (17868, 17902), (17974, 17986), (17991, 17996), (18161, 18170), (18330, 18443), (18540, 18550), (18580, 18588), (18661, 18683), (18955, 18966), (18977, 18986), (19019, 19026), (19092, 19100),\n",
+ "    (19170, 19185), (20310, 20354), (20540, 20547), (20594, 20599), (20737, 20760), (21058, 21063), (21478, 21499), (21832, 21872), (21947, 21961), (21986, 22003), (22014, 22030), (22037, 22048),\n",
+ "    (22149, 22152), (22174, 22197), (22212, 22241), (22249, 22251), (22263, 22269), (22279, 22344), (22358, 22361), (22397, 22430), (22442, 22450), (22483, 22502), (22571, 22578), (22593, 22596),\n",
+ "    (22944, 22950), (22999, 23004), (23026, 23029), (23045, 23049), (23141, 23148), (23166, 23171), (23197, 23214), (23402, 23407), (23486, 23493), (23496, 23503), (23519, 23534), (23571, 23577),\n",
+ "    (23593, 23598), (23666, 23675), (23691, 23703), (23707, 23711), (23842, 23855), (23950, 23955), (24988, 24997), (25004, 25030), (25037, 25044), (25256, 25320), (25373, 25380), (25398, 25406),\n",
+ "    (25507, 25521), (25825, 25846), (26086, 26091), (26120, 26135), (26890, 26897), (26997, 27012), (27408, 27423), (27432, 27435), (27926, 27943), (28687, 28693), (29321, 29331), (29384, 29421),\n",
+ "    (29525, 29532), (29693, 29707), (29974, 29981), (29994, 30010), (30073, 30091), (30103, 30106), (30137, 30142), (30174, 30179), (30204, 30211), (31301, 31325), (31332, 31340), (31396, 31410),\n",
+ "    (31433, 31437), (31448, 31482), (31492, 31551), (31611, 31628), (31666, 31712), (31748, 31796), (31823, 31828), (31831, 31836), (31848, 31865), (31903, 31965), (31998, 32062), (32102, 32128),\n",
+ "    (32143, 32165), (32214, 32242), (32344, 32348), (33317, 33328), (33341, 33346), (33370, 33386), (33430, 33513), (33717, 33721), (33754, 33781), (33917, 33923), (34234, 34242), (34631, 34655),\n",
+ "    (34742, 34757), (34775, 34811), (34849, 34857), (34978, 34992), (35362, 35366), (35386, 35390), (35395, 35400), (35430, 35440), (35464, 35466)]\n",
+ "filter_tunnels = np.ones(len(new_img_ids), dtype=bool)  # np.bool was removed from NumPy; plain bool works\n",
+ "last = 0\n",
+ "for tunnel in tunnels:\n",
+ "    # print(tunnel[1]-tunnel[0])\n",
+ "    # print(tunnel[1])\n",
+ "    assert tunnel[0] > last\n",
+ "    last = tunnel[1]\n",
+ "    filter_tunnels[tunnel[0]-1:tunnel[1]-1] = False"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "filter_speed = new_speeds > 1500\n",
+ "all_filters = np.logical_and(filter_speed, filter_tunnels)\n",
+ "max_im_num = 10000000000000  # 10000000000 for all\n",
+ "\n",
+ "whichSet = 'test'\n",
+ "dataset = 'nordland'\n",
+ "dbImage = ['images-%05d.png' % img_id for img_id in new_img_ids[all_filters][:max_im_num]]\n",
+ "qImage = dbImage"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "numDb = len(dbImage)\n",
+ "numQ = len(qImage)\n",
+ "\n",
+ "posDistThr = 2\n",
+ "posDistSqThr = posDistThr**2\n",
+ "nonTrivPosDistSqThr = 100\n",
+ "\n",
+ "gpsDb = np.vstack((new_lats[all_filters][:max_im_num], new_lons[all_filters][:max_im_num])).T\n",
+ "gpsQ = gpsDb\n",
+ "\n",
+ "utmDb = np.vstack((range(numDb), range(numDb))).T\n",
+ "utmQ = utmDb\n",
+ "# utmQ = None; utmDb = None\n",
+ "\n",
+ "dbTimeStamp = None; qTimeStamp = None\n",
+ "\n",
+ "db = dbStruct(whichSet, dataset, dbImage, utmDb, qImage, utmQ, numDb, numQ, posDistThr,\n",
+ "              posDistSqThr, nonTrivPosDistSqThr, dbTimeStamp, qTimeStamp, gpsDb, gpsQ)\n",
+ "\n",
+ "save_db_struct('nordland.mat', db)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.neighbors import NearestNeighbors\n",
+ "knn = NearestNeighbors(n_jobs=-1)\n",
+ "knn.fit(db.utmDb)\n",
+ "distances, positives = knn.radius_neighbors(db.utmQ, radius=db.posDistThr)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "positives"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Other stuff"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "source_dir = '/media/storage_hdd/Datasets/nordland/640x320-color/'\n",
+ "dest_dir = '/media/storage_hdd/Datasets/nordland/640x320-color-filtered/'\n",
+ "for season in ['summer', 'spring', 'fall', 'winter']:\n",
+ "    os.makedirs(os.path.join(dest_dir, season))\n",
+ "    for idx, im in enumerate(dbImage):\n",
+ "        os.symlink(os.path.join(source_dir, season, im), os.path.join(dest_dir, season, 'filtered-%05d.png' % idx))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with open('nordland_matches.txt', 'w') as outfile:\n",
+ "    for im_name1 in dbImage:\n",
+ "        for im_name2 in dbImage:\n",
+ "            outfile.write('summer/' + im_name1 + ' ' + 'winter/' + im_name2 + '\\n')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "__End__"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [conda env:netvlad20]",
+ "language": "python",
+ "name": "conda-env-netvlad20-py"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.7"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "state": {},
+ "version_major": 2,
+ "version_minor": 0
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+ }
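The notebook's two core steps, converting hhmmss-encoded GPS timestamps to seconds and matching each video frame to the nearest timestamp, can be sketched standalone as below. This is a minimal sketch mirroring `val_to_sec`; the function name `hhmmss_to_seconds` and the sample values are illustrative, not part of the dataset.

```python
import numpy as np

def hhmmss_to_seconds(val):
    """Convert hhmmss-encoded values (e.g. 152840 -> 15h 28m 40s) to seconds since midnight."""
    val = np.asarray(val)
    hours = val // 10000
    minutes = (val % 10000) // 100
    secs = val % 100
    return hours * 3600 + minutes * 60 + secs

# Nearest-timestamp matching, as in the notebook: for a desired time,
# take the index of the GPS timestamp with the smallest absolute difference.
timestamps = hhmmss_to_seconds(np.array([53806, 53810, 53815]))  # 05:38:06, 05:38:10, 05:38:15
target = int(hhmmss_to_seconds(53812))                           # 05:38:12
best_idx = int(np.abs(timestamps - target).argmin())

print(int(hhmmss_to_seconds(152840)))  # 15*3600 + 28*60 + 40 = 55720
print(best_idx)                        # 05:38:10 is closest -> index 1
```

Integer floor division replaces the notebook's float-divide-then-cast, which avoids the removed `np.int` alias entirely.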
README.md CHANGED
@@ -1,3 +1,53 @@
  ---
  license: cc-by-nc-sa-4.0
  ---
+
+ ## Nordland dataset
+
+ This dataset is derived from the original videos released here: [https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/](https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/)
+
+
+ ### Citation Information
+
+ Please cite the original publication if you use this dataset.
+
+ Sünderhauf, Niko, Peer Neubert, and Peter Protzel. "Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons." Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA). 2013.
+
+ ```bibtex
+ @inproceedings{sunderhauf2013we,
+   title={Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons},
+   author={S{\"u}nderhauf, Niko and Neubert, Peer and Protzel, Peter},
+   booktitle={Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA)},
+   year={2013}
+ }
+ ```
+
+ ### Dataset Description
+
+ The Nordland dataset captures a 728 km railway journey in Norway across four seasons: spring, summer, fall, and winter.
+ It is organized into four folders, one per season, each containing 35,768 images.
+
+ The images are in one-to-one correspondence across folders: image *n* in one season shows the same place as image *n* in the others.
+ For each traverse, the corresponding ground-truth data is available in the `.csv` files under `annotations/`.
+
+ We have also included a file named `nordland_imageNames.txt`, which provides a filtered list of images.
+ This selection excludes segments captured while the train's speed was below 15 km/h, as determined by the accompanying GPS data.
+
+
+ ### Our utilisation
+
+ We have used this dataset for the three publications below:
+
+ * Ensembles of Modular SNNs with/without sequence matching: [Applications of Spiking Neural Networks in Visual Place Recognition](https://arxiv.org/abs/2311.13186)
+
+ * Modular SNN: [Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition (ICRA 2023)](https://arxiv.org/abs/2209.08723) DOI: [10.1109/ICRA48891.2023.10160749](https://doi.org/10.1109/ICRA48891.2023.10160749)
+
+ * Non-modular SNN: [Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments (RAL + ICRA2022)](https://arxiv.org/abs/2109.06452) DOI: [10.1109/LRA.2022.3149030](https://doi.org/10.1109/LRA.2022.3149030)
+
+ The code for our three papers mentioned above is publicly available at: [https://github.com/QVPR/VPRSNN](https://github.com/QVPR/VPRSNN)
+
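The README's 15 km/h speed filter can be sketched as follows. This assumes, as the notebook's `new_speeds > 1500` threshold suggests, that raw GPS speed values are stored at 100× km/h, so 15 km/h corresponds to 1500; the array values here are synthetic, not taken from the dataset.

```python
import numpy as np

# Synthetic raw speed values; the real ones come from the annotations/*.csv files.
# Assumption: speeds are encoded as km/h * 100, so the 15 km/h cutoff is 1500.
speeds_raw = np.array([0, 800, 1500, 2300, 9000])
keep = speeds_raw > 1500  # True where the train moves faster than 15 km/h
kept_indices = np.flatnonzero(keep)
print(kept_indices.tolist())  # [3, 4]
```

Note the strict `>` comparison: a frame at exactly 15 km/h is excluded, matching the notebook's filter.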
annotations/fall.csv ADDED
annotations/spring.csv ADDED
annotations/summer.csv ADDED
annotations/winter.csv ADDED
dataset_imageNames/nordland_imageNames.txt ADDED