fhofmann committed
Commit 52d4fa6
1 Parent(s): 42da6b9

Readme, splits, metrics

Files changed (3)
  1. README.md +82 -3
  2. metrics.md +29 -0
  3. splits_final.json +0 -0
README.md CHANGED
@@ -1,3 +1,82 @@
- ---
- license: cc-by-sa-4.0
- ---
+ ---
+ license: cc-by-sa-4.0
+ datasets:
+ - fhofmann/VertebralBodiesCT-Labels
+ pipeline_tag: image-segmentation
+ tags:
+ - medical
+ ---
+
+ # Model Card for VertebralBodiesCT-ResEncL
+ The VertebralBodiesCT-ResEncL model is a deep learning segmentation model built using the [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) framework.
+ It is designed to identify thoracic and lumbar vertebral bodies in CT scans, specifically excluding the vertebral arches and spinous processes.
+
+ **This is the larger, more performant model, trained using 5 folds with the [ResEnc L presets](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md#how-to-use-the-new-presets).
+ A smaller, faster version using a single fold with the ResEnc M presets is available [here](https://huggingface.co/fhofmann/VertebralBodiesCT-ResEncM).**
+
+ ## Model Details
+ This model was developed for segmenting thoracic and lumbar vertebral bodies in CT scans, with a focus on anatomical landmark identification for applications such as body composition analysis and sarcopenia assessment.
+ It was trained on a dataset derived from the [TotalSegmentator](https://github.com/wasserth/TotalSegmentator/) and [VerSe](https://github.com/anjany/verse) datasets, which were refined [to focus solely on the vertebral bodies](https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels/blob/main/suppl/3_vertebralbodies.md).
+ The model is available under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) and is accompanied by the [training labels](https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels) and a [pipeline](https://github.com/fohofmann/BodyComposition) for body composition analysis.
+
+ ## Uses
+ The VertebralBodiesCT-ResEncL model is intended for segmenting thoracic and lumbar vertebral bodies in CT scans.
+ For instance, after integration into a [pipeline](https://github.com/fohofmann/BodyComposition), it can be used to localize measurements relative to anatomical landmarks, as sketched below.
+ The model excludes vertebral arches and spinous processes, making it unsuitable for whole-spine analysis.
+ It is not intended for clinical application.
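+
+ As an illustrative, hypothetical example of such landmark-based use (not part of the released pipeline), the snippet below derives the mid-slice of the L3 vertebral body from a predicted segmentation, assuming label 15 corresponds to L3 (see the label scheme under Training Details) and that the axial axis is the last array axis; file names are placeholders.
+ ```python
+ # Hypothetical sketch: locate the axial slice at the centre of the L3 vertebral body.
+ # Assumes label 15 = L3 and that the axial axis is the last axis of the array.
+ import nibabel as nib
+ import numpy as np
+
+ L3_LABEL = 15  # per the label scheme of this model
+
+ seg = nib.load("segmentation.nii.gz")       # placeholder path to a predicted mask
+ mask = np.asarray(seg.dataobj) == L3_LABEL  # boolean mask of the L3 vertebral body
+
+ if mask.any():
+     z_indices = np.where(mask.any(axis=(0, 1)))[0]  # axial slices containing L3
+     l3_mid_slice = int(z_indices.mean().round())    # slice at the vertebral body centre
+     print(f"L3 mid-slice index: {l3_mid_slice}")
+ else:
+     print("L3 not found in this scan")
+ ```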
+
+ ## Bias, Risks, and Limitations
+ - Data Source Bias: Despite being trained on a heterogeneous dataset, the model may not fully represent diverse populations, imaging techniques, or scanner types.
+ - Reduced Performance in Pathologies: The model may perform poorly in cases with uncommon or complex spinal pathologies that are not well represented in the dataset.
+ - Persistent Labeling Errors: Some labeling errors may persist in the training dataset. This could cause segmentation errors, especially in cases with transitional vertebrae.
+ - Exclusion of the Cervical Spine: The model is designed only for the thoracic and lumbar vertebrae and the sacrum; cervical vertebrae are not segmented.
+ - Exclusion of Vertebral Arches and Spinous Processes: The labels focus solely on vertebral bodies, excluding vertebral arches and spinous processes.
+ - No Clinical Usage: This model is intended solely for research purposes.
+
+ ## How to Get Started with the Model
+ The model is a standard, unmodified [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) model.
+ Installation instructions for nnU-Net can be found [here](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md), with more information on inference [here](https://github.com/MIC-DKFZ/nnUNet/tree/master/nnunetv2/inference).
+ We provide an integration into a [Body Composition analysis pipeline](https://github.com/fohofmann/BodyComposition).
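+
+ As a rough sketch, inference with the downloaded model folder can also be scripted through nnU-Net v2's Python API; the paths below are placeholders and the exact argument names may differ between nnU-Net versions, so please refer to the linked inference documentation.
+ ```python
+ # Illustrative sketch of nnU-Net v2 inference with this model.
+ # Paths are placeholders; argument names may vary between nnU-Net versions.
+ import torch
+ from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor
+
+ predictor = nnUNetPredictor(
+     tile_step_size=0.5,
+     use_gaussian=True,
+     use_mirroring=True,
+     device=torch.device("cuda", 0),
+ )
+ # Point to the downloaded model folder (containing plans.json and fold_0 ... fold_4)
+ predictor.initialize_from_trained_model_folder(
+     "/path/to/nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres",
+     use_folds=(0, 1, 2, 3, 4),
+     checkpoint_name="checkpoint_final.pth",
+ )
+ # Input CT scans must follow the nnU-Net naming convention (e.g. case_0000.nii.gz)
+ predictor.predict_from_files(
+     "/path/to/input_folder",
+     "/path/to/output_folder",
+     save_probabilities=False,
+     overwrite=True,
+ )
+ ```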
+
+ ## Training Details
+ The training dataset consists of 1216 labels corresponding to CT scans derived from the [TotalSegmentator](https://github.com/wasserth/TotalSegmentator/) and [VerSe](https://github.com/anjany/verse) datasets.
+ Detailed information on the labeling process can be found [here](https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels).
+ The model segments thoracic vertebrae T1 to T12 (labels 1-12), lumbar vertebrae L1 to L5 (labels 13-17), and the sacrum (label 19), with optional vertebrae such as T13 (label 21) or L6 (label 18).
+ Training was conducted using 5 folds over 1000 epochs.
+ The [dataset_fingerprint.json](nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres/dataset_fingerprint.json), the [model plan with model configuration and hyperparameters](nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres/plans.json), the [splits](splits_final.json) used for 5-fold cross-validation, and the training logs of each fold (e.g., [fold 0](https://huggingface.co/fhofmann/VertebralBodiesCT-ResEncL/tree/main/nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres/fold_0/logs)) are available with the model.
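+
+ For downstream processing, the label scheme above can be restated as a simple mapping; the dictionary below is merely a convenience listing of the IDs given in this section and in [metrics.md](metrics.md), not a file shipped with the model.
+ ```python
+ # Label scheme of this model restated as a Python dict (0 = background).
+ VERTEBRAL_BODY_LABELS = {
+     0: "background",
+     **{i: f"T{i}" for i in range(1, 13)},      # 1-12: thoracic vertebrae T1-T12
+     **{i + 12: f"L{i}" for i in range(1, 6)},  # 13-17: lumbar vertebrae L1-L5
+     18: "L6 (optional)",
+     19: "sacrum",
+     20: "os coccygis",   # listed in metrics.md
+     21: "T13 (optional)",
+ }
+ ```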
+
+ ## Evaluation
+ The model's performance metrics were assessed on a separate test dataset, and its clinical value in identifying the third lumbar vertebra (L3) was explored in an [independent oncological dataset](https://doi.org/10.25737/SZ96-ZG60). Model metrics are available [here](metrics.md), and detailed results are described in [our paper](#Citation).
+
+ ## Technical Requirements
+ According to the [residual encoder UNet presets](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md), the nnU-Net ResEnc L configuration requires a GPU with 24 GB of VRAM.
+
+ ## Citation
+ We deeply appreciate the foundational work of the TotalSegmentator and VerSe authors.
+ The generation of the labels would not have been possible without their open-source contributions.
+ If you use this model, please cite the following publications:
+ ```
+ Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D., Cyriac, J., Yang, S., Bach, M., Segeroth, M. (2023). TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology: Artificial Intelligence. https://doi.org/10.1148/ryai.230024
+ ```
+ ```
+ Sekuboyina, A., Husseini, M.E., Bayat, A., Löffler, M., Liebl, H., Li, H., Tetteh, G., Kukačka, J., Payer, C., Štern, D., Urschler, M., Chen, M., Cheng, D., Lessmann, N., Hu, Y., Wang, T., Yang, D., Xu, D., Ambellan, F., Amiranashvili, T., Ehlke, M., Lamecker, H., Lehnert, S., Lirio, M., Pérez de Olaguer, N., Ramm, H., Sahu, M., Tack, A., Zachow, S., Jiang, T., Ma, X., Angerman, C., Wang, X., Brown, K., Kirszenberg, A., Puybareau, É., Chen, D., Bai, Y., Rapazzo, B.H., Yeah, T., Zhang, A., Xu, S., Hou, F., He, Z., Zeng, C., Xiangshang, Z., Liming, X., Netherton, T.J., Mumme, R.P., Court, L.E., Huang, Z., He, C., Wang, L.-W., Ling, S.H., Huỳnh, L.D., Boutry, N., Jakubicek, R., Chmelik, J., Mulay, S., Sivaprakasam, M., Paetzold, J.C., Shit, S., Ezhov, I., Wiestler, B., Glocker, B., Valentinitsch, A., Rempfler, M., Menze, B.H., Kirschke, J.S. (2021). VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Medical Image Analysis. https://doi.org/10.1016/j.media.2021.102166
+ ```
+ ```
+ Haubold, J., Baldini, G., Parmar, V., Schaarschmidt, B.M., Koitka, S., Kroll, L., van Landeghem, N., Umutlu, L., Forsting, M., Nensa, F., Hosch, R. (2023). BOA: A CT-Based Body and Organ Analysis for Radiologists at the Point of Care. Investigative Radiology. https://doi.org/10.1097/RLI.0000000000001040
+ ```
+
+ Since the model is based on the [nnU-Net framework](https://github.com/MIC-DKFZ/nnUNet) and the [residual encoder UNet presets](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md), please cite the following papers:
+ ```
+ Isensee, F., Jaeger, P.F., Kohl, S. A., Petersen, J., Maier-Hein, K.H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211. https://doi.org/10.1038/s41592-020-01008-z
+ ```
+ ```
+ Isensee, F., Wald, T., Ulrich, C., Baumgartner, M. , Roy, S., Maier-Hein, K., Jaeger, P. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation. arXiv preprint arXiv:2404.09556.
+ ```
+
+ Please cite our dataset:
+ ```
+ Hofmann F.O. et al. Thoracic & lumbar vertebral body labels corresponding to 1460 public CT scans. https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels/
+ ```
+
+ ## Contact
+ Any feedback, questions or recommendations? Feel free to [contact us](https://huggingface.co/fhofmann/VertebralBodiesCT-ResEncL/discussions)! 🤗
metrics.md ADDED
@@ -0,0 +1,29 @@
+ # Model metrics
+
+ Model testing was performed on the held-out [test set of the dataset](https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels).
+ The Dice similarity index (Dice) and the normalized surface distance (NSD) were calculated for each label individually, and 95% confidence intervals were computed using bootstrap resampling with 1000 iterations (a minimal sketch of this bootstrap procedure is given below the table).
+
+ | Class ID | Class Description | Dice [95% CI] | NSD [95% CI] |
+ | --- | --- | --- | --- |
+ | 0 | background | 1.0 [1.0 - 1.0] | 0.999 [0.999 - 1.0] |
+ | 1 | T1 | 0.946 [0.928 - 0.958] | 0.979 [0.961 - 0.99] |
+ | 2 | T2 | 0.954 [0.94 - 0.965] | 0.993 [0.985 - 0.998] |
+ | 3 | T3 | 0.956 [0.939 - 0.969] | 0.989 [0.976 - 0.998] |
+ | 4 | T4 | 0.946 [0.917 - 0.968] | 0.979 [0.956 - 0.996] |
+ | 5 | T5 | 0.949 [0.923 - 0.968] | 0.981 [0.961 - 0.997] |
+ | 6 | T6 | 0.947 [0.919 - 0.969] | 0.978 [0.955 - 0.997] |
+ | 7 | T7 | 0.94 [0.908 - 0.966] | 0.97 [0.941 - 0.992] |
+ | 8 | T8 | 0.941 [0.912 - 0.966] | 0.969 [0.944 - 0.991] |
+ | 9 | T9 | 0.934 [0.903 - 0.959] | 0.962 [0.937 - 0.985] |
+ | 10 | T10 | 0.933 [0.906 - 0.959] | 0.963 [0.94 - 0.985] |
+ | 11 | T11 | 0.927 [0.897 - 0.955] | 0.951 [0.923 - 0.978] |
+ | 12 | T12 | 0.931 [0.9 - 0.958] | 0.955 [0.926 - 0.981] |
+ | 13 | L1 | 0.938 [0.907 - 0.963] | 0.959 [0.928 - 0.984] |
+ | 14 | L2 | 0.962 [0.943 - 0.978] | 0.982 [0.963 - 0.997] |
+ | 15 | L3 | 0.962 [0.94 - 0.978] | 0.981 [0.957 - 0.996] |
+ | 16 | L4 | 0.952 [0.923 - 0.971] | 0.968 [0.939 - 0.988] |
+ | 17 | L5 | 0.936 [0.91 - 0.955] | 0.958 [0.932 - 0.976] |
+ | 18 | L6 | 0.0 [0.0 - 0.0] | 0.0 [0.0 - 0.0] |
+ | 19 | Sacrum | 0.958 [0.951 - 0.965] | 0.983 [0.975 - 0.988] |
+ | 20 | Os coccygis | NA | NA |
+ | 21 | T13 | 0.0 [0.0 - 0.0] | 0.0 [0.0 - 0.0] |
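+
+ As an illustration of how such intervals can be obtained (a generic sketch, not the exact evaluation code used for this model), a percentile bootstrap over per-case Dice scores could look as follows; the example values are made up.
+ ```python
+ # Generic sketch of a percentile bootstrap 95% CI over per-case Dice scores.
+ # Illustrative only; not the exact evaluation code used for this model.
+ import numpy as np
+
+ def bootstrap_ci(per_case_scores, n_iter=1000, alpha=0.05, seed=0):
+     """Mean score with a percentile bootstrap (1 - alpha) confidence interval."""
+     rng = np.random.default_rng(seed)
+     scores = np.asarray(per_case_scores, dtype=float)
+     means = np.array([
+         rng.choice(scores, size=scores.size, replace=True).mean()
+         for _ in range(n_iter)
+     ])
+     lower, upper = np.quantile(means, [alpha / 2, 1 - alpha / 2])
+     return scores.mean(), lower, upper
+
+ # Example with made-up per-case Dice values for one label:
+ mean, lo, hi = bootstrap_ci([0.93, 0.95, 0.96, 0.91, 0.97])
+ print(f"Dice {mean:.3f} [{lo:.3f} - {hi:.3f}]")
+ ```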
splits_final.json ADDED
The diff for this file is too large to render. See raw diff