WesleyHsieh0806 committed on
Commit
7de324b
1 Parent(s): bc8a7b7

amodal annotations for visualization, training, and evaluation

README.md CHANGED
@@ -1,147 +1,3 @@
- # TAO-Amodal Dataset
-
- <!-- Provide a quick summary of the dataset. -->
- Official source for downloading the TAO-Amodal dataset.
-
- [**📙 Project Page**](https://tao-amodal.github.io/) | [**💻 Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**📎 Paper Link**](https://arxiv.org/abs/2312.12433) | [**✏️ Citations**](#citation)
-
- <div align="center">
- <a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
- </div>
-
- <br/>
-
- Contact: [🙋🏻‍♂️Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)
-
- ## Dataset Description
- Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects.
- Note that this implies TAO-Amodal also includes modal segmentation masks (as visualized in the color overlays above).
- The dataset encompasses 880 categories and is aimed at assessing the occlusion-reasoning capabilities of current trackers
- through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).
-
- ### Dataset Download
- 1. Download all the annotations:
- ```bash
- git lfs install
- git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal
- ```
-
- 2. Download all the video frames:
-
-    You can either download the frames following the instructions [here](https://motchallenge.net/tao_download.php) (recommended), or modify our provided [script](./download_TAO.sh) and run:
- ```bash
- bash download_TAO.sh
- ```
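-
- If you prefer not to clone the entire repository, the annotation files can also be fetched with the `huggingface_hub` Python library. This is a minimal sketch, not part of the official instructions; the `local_dir` value and the file pattern are assumptions based on the layout below:
- ```python
- from huggingface_hub import snapshot_download
-
- # Fetch only the annotation JSONs from the dataset repo.
- snapshot_download(
-     repo_id="chengyenhsieh/TAO-Amodal",
-     repo_type="dataset",
-     allow_patterns="amodal_annotations/*",
-     local_dir="TAO-Amodal",  # assumed local destination
- )
- ```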
-
- ## 📚 Dataset Structure
-
- The dataset should be structured like this:
- ```bash
- ├── frames
- │   └── train
- │       ├── ArgoVerse
- │       ├── BDD
- │       ├── Charades
- │       ├── HACS
- │       ├── LaSOT
- │       └── YFCC100M
- ├── amodal_annotations
- │   ├── train/validation/test.json
- │   ├── train_lvis_v1.json
- │   └── validation_lvis_v1.json
- ├── example_output
- │   └── prediction.json
- └── BURST_annotations
-     └── train
-         └── train_visibility.json
- ```
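-
- To sanity-check a local copy against this layout, here is a minimal sketch; the root path is an assumption, and the listed paths follow the tree above:
- ```python
- from pathlib import Path
-
- # Assumed root of the cloned dataset -- adjust to your setup.
- ROOT = Path("TAO-Amodal")
-
- expected = [
-     "frames/train",
-     "amodal_annotations/train.json",
-     "amodal_annotations/validation.json",
-     "amodal_annotations/test.json",
-     "amodal_annotations/train_lvis_v1.json",
-     "amodal_annotations/validation_lvis_v1.json",
-     "BURST_annotations/train/train_visibility.json",
- ]
-
- for rel in expected:
-     status = "ok" if (ROOT / rel).exists() else "MISSING"
-     print(f"{status:7s} {rel}")
- ```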
-
- ## 📚 File Descriptions
-
- | File Name | Description |
- | ------------------ | ---------------------------------- |
- | train/validation/test.json | Formal annotation files. We use these annotations for visualization. Categories include those in [LVIS](https://www.lvisdataset.org/) v0.5 and freeform categories. |
- | train_lvis_v1.json | We use this file to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in LVIS v1.0. |
- | validation_lvis_v1.json | We use this file to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in LVIS v1.0. |
- | prediction.json | Example output JSON from the amodal-expander. Tracker predictions should be structured like this file to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
- | BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark) with our heuristic visibility attributes. We provide these files for the convenience of visualization. |
-
- ### Annotation and Prediction Format
-
- Our annotations are structured similarly to [TAO](https://github.com/TAO-Dataset/annotations), with some modifications.
-
- Annotation file format:
- ```bash
- {
-     "info" : info,
-     "images" : [image],
-     "videos": [video],
-     "tracks": [track],
-     "annotations" : [annotation],
-     "categories": [category],
-     "licenses" : [license],
- }
- annotation: {
-     "id": int,
-     "image_id": int,
-     "track_id": int,
-     "bbox": [x,y,width,height],
-     "area": float,
-
-     # Redundant fields for compatibility with COCO scripts
-     "category_id": int,
-     "video_id": int,
-
-     # Other important attributes for evaluation on TAO-Amodal
-     "amodal_bbox": [x,y,width,height],
-     "amodal_is_uncertain": bool,
-     "visibility": float,  # ranges from 0.0 to 1.0
- }
- image, info, video, track, category, licenses: Same as TAO
- ```
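-
- As a quick illustration of these fields, the sketch below loads an annotation file and prints the amodal box and visibility of a few annotations. The file path is an assumption; the field accesses follow the format above:
- ```python
- import json
-
- # Assumed path, following the repository layout above.
- with open("TAO-Amodal/amodal_annotations/validation.json") as f:
-     data = json.load(f)
-
- # Index images by id so each annotation can be tied back to its frame.
- images = {img["id"]: img for img in data["images"]}
-
- for ann in data["annotations"][:5]:
-     x, y, w, h = ann["amodal_bbox"]
-     frame = images[ann["image_id"]]["file_name"]
-     print(f"track {ann['track_id']} on {frame}: "
-           f"amodal box=({x:.0f},{y:.0f},{w:.0f},{h:.0f}), "
-           f"visibility={ann['visibility']:.2f}")
- ```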
-
- Predictions should be structured as:
-
- ```bash
- [{
-     "image_id" : int,
-     "category_id" : int,
-     "bbox" : [x,y,width,height],
-     "score" : float,
-     "track_id": int,
-     "video_id": int
- }]
- ```
- Refer to the [TAO dataset](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) instructions for further details.
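-
- For example, a tracker's output could be serialized like this minimal sketch (all values are placeholders):
- ```python
- import json
-
- # One entry per object per frame; boxes are [x, y, width, height].
- predictions = [{
-     "image_id": 1,
-     "category_id": 805,  # placeholder category id
-     "bbox": [120.0, 45.5, 230.0, 180.0],
-     "score": 0.87,
-     "track_id": 3,
-     "video_id": 1,
- }]
-
- with open("prediction.json", "w") as f:
-     json.dump(predictions, f)
- ```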
-
- ## 📺 Example Sequences
- Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code.
- [<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal")
-
- ## Citation
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and BibTeX information for that should go in this section. -->
- ```
- @misc{hsieh2023tracking,
-       title={Tracking Any Object Amodally},
-       author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
-       year={2023},
-       eprint={2312.12433},
-       archivePrefix={arXiv},
-       primaryClass={cs.CV}
- }
- ```
-
- ---
- task_categories:
- - object-detection
- - multi-object-tracking
- license: mit
- ---
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d49e8f4a19e17187986b38cda99ce55c98c03f7eae159df28b6b7fc40c4b272f
+ size 5416
 
amodal_annotations/{test_with_freeform_amodal_boxes_Jan10_2023_oof_visibility.json → test.json} RENAMED
File without changes
amodal_annotations/{train_with_freeform_amodal_boxes_may12_2022_oof_visibility.json → train.json} RENAMED
File without changes
amodal_annotations/train_lvis_v1.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d6481ee8465eb1c2d54a3e49428fb62ae0596a89f739eb9951b4a6319fe01e8
+ size 29326966
amodal_annotations/{validation_with_freeform_amodal_boxes_Aug10_2022_oof_visibility.json → validation.json} RENAMED
File without changes
amodal_annotations/validation_lvis_v1.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30bd767b05912e808c1c011ac1dd9bb33fb40c7c671ac3e3b3a015f86129ec24
+ size 66643652