We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further public data sets for a comprehensive evaluation of medical object detection methods.
# Installation
## Docker
The easiest way to get started with nnDetection is to build a Docker container with the provided Dockerfile.
Please install Docker and [nvidia-docker2](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) before continuing.
...
When running a training inside the container, it is necessary to [increase the shared memory](https://stackoverflow.com/questions/30210362/how-to-increase-the-size-of-the-dev-shm-in-docker-container) (via `--shm-size`).
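For example, a container start could look like the following sketch (the image tag `nndetection:latest`, the mount path, and the 24 GB value are assumptions; choose a shared-memory size that fits your machine):

```bash
# start an interactive container with enlarged shared memory (assumed image/paths)
docker run --gpus all -it --shm-size=24g \
    -v ${det_data}:/opt/data \
    nndetection:latest /bin/bash
```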
## Source
1. Install CUDA (>10.1) and cuDNN (make sure to select [compatible versions](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html)!)
2. [Optional] Depending on your GPU you might need to set `TORCH_CUDA_ARCH_LIST`; check the [compute capabilities](https://developer.nvidia.com/cuda-gpus) of your card here.
3. Install [torch](https://pytorch.org/) (make sure to match the PyTorch and CUDA versions; PyTorch 1.7+ is required)
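For step 2 above, the variable could be set like this before installation (compute capability 8.6 is an assumed example, e.g. for an RTX 3090; look up the value for your own GPU):

```shell
# build only for the compute capability of the local GPU (8.6 is an assumed example)
export TORCH_CUDA_ARCH_LIST="8.6"
```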
...
To test the whole installation please run the Toy Dataset example.
</details>
<details close>
...
</details>
# Experiments & Data
The datasets used for our experiments are not hosted or maintained by us; please give credit to the authors of the datasets.
Some labels were corrected in the datasets which we converted, and these corrected versions can be downloaded.
The `Reproducing Experiments` section has an overview of multiple guides which explain the preparation of the datasets.
## Toy Dataset
Running `nndet_example` will automatically generate an example dataset with 3D squares and squares with holes which can be used to test the installation or experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the dataset).
```bash
# create data to test installation/environment (10 train / 10 test)
nndet_example

# create full dataset for prototyping (1000 train / 1000 test)
nndet_example --full [--num_processes]
```
...
nnDetection relies on a standardized input format which is very similar to the [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) format and allows easy integration of new datasets.
More details about the format can be found below.
### Folders
All datasets should reside inside a `Task[Number]_[Name]` folder inside the specified detection data folder (set the path to this folder with the `det_data` environment variable).
To avoid conflicts with our provided pretrained models, we recommend using task numbers starting from 100.
An overview is provided below (`[Name]` denotes folders, `-` denotes files, indents indicate substructures)
...
...
```
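To start a new task, the skeleton of this structure can be created up front; a minimal sketch, assuming the task name `Task100_Example` and the folder names `raw_splitted/imagesTr` etc. (verify them against the overview above for your nnDetection version):

```shell
# create the skeleton for a hypothetical new task (names are assumptions)
mkdir -p Task100_Example/raw_splitted/imagesTr
mkdir -p Task100_Example/raw_splitted/labelsTr
mkdir -p Task100_Example/raw_splitted/imagesTs
mkdir -p Task100_Example/raw_splitted/labelsTs
```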
### Dataset Info
`dataset.yaml` or `dataset.json` provides general information about the dataset:
Note: [Important] Classes and modalities start with index 0!
```yaml
task: Task000D3_Example
...
target_class: # define class of interest for patient level evaluations
test_labels: True # manually split test set
labels: # classes of dataset; need to start at 0
  "0": "Square"
  "1": "SquareHole"
modalities: # modalities of dataset; need to start at 0
  "0": "CT"
```
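As a quick sketch, the toy example's info file from above could be written from the shell like this (the standalone folder location is an assumption; skip this if the file already exists):

```shell
# write a minimal dataset.yaml for the toy example shown above
mkdir -p Task000D3_Example
cat > Task000D3_Example/dataset.yaml <<'EOF'
task: Task000D3_Example
dim: 3
labels:
  "0": "Square"
  "1": "SquareHole"
modalities:
  "0": "CT"
EOF
```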
### Image Format
nnDetection uses the same image format as nnU-Net.
Each case consists of at least one 3D NIfTI file with one modality and is saved in the `images` folders.
If multiple modalities are available, each modality uses a separate file, and the suffix at the end of the name indicates the modality (corresponding to the number specified in the dataset file).
An example with two modalities could look like this:
```text
...
[analysis] # some plots to visualize properties of the underlying data set
[properties] # sufficient for new plans
[labelsTr] # labels in original format (original spacing)
[labelsTs] # optional
...
- {name of plan}.pkl e.g. D3V001_3d.pkl
```
Before starting the training, copy the data (task folder, dataset info and preprocessed folder are needed) to an SSD (highly recommended) and unpack the image data with
```bash
nndet_unpack [path] [num_processes]
```
...
## nnU-Net for Detection
Besides nnDetection we also include the scripts to prepare and evaluate nnU-Net in the context of object detection.
Both frameworks need to be configured correctly before running the scripts to ensure correctness.
After preparing the dataset in the nnDetection format (which is a superset of the nnU-Net format) it is possible to export it to nnU-Net via `scripts/nnunet/nnunet_export.py`. Since nnU-Net needs task ids without any additions, it may be necessary to overwrite the task name via the `-nt` option for some datasets (e.g. `Task019FG_ADAM` needs to be renamed to `Task019_ADAM`).
Follow the usual nnU-Net preprocessing and training pipeline to generate the needed models.
Use the `--npz` option during training to save the predicted probabilities, which are needed to generate the detection results.
After determining the best ensemble configuration from nnU-Net, pass all paths to `scripts/nnunet/nnunet_export.py`, which will ensemble and postprocess the predictions for object detection.
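As an illustration, the export step might be invoked as follows (only the `-nt` option is taken from the text above; the positional task argument is an assumption, so check the script's help output for the actual interface):

```bash
# export a prepared nnDetection task to nnU-Net, overriding the task name
python scripts/nnunet/nnunet_export.py Task019FG_ADAM -nt Task019_ADAM
```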
...
</details>
<details close>
<summary>Mask RCNN and 2D Datasets</summary>
<br>
2D datasets and Mask R-CNN are not supported in the first release.
...
1. Follow the instructions and usage policies to download the data and place the data and labels at the following locations: data -> `Task017_CADA / raw / train_dataset` and labels -> `Task017_CADA / raw / train_mask_images`
2. Run `python prepare.py` in `projects / Task017_CADA / scripts` of the nnDetection repository.
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
Please make sure to read the requirements and usage policies of the data and **give credit to the authors of the dataset**!
Please read the information from the homepage carefully and follow the rules and instructions provided by the original authors when using the data.
- Homepage: http://adam.isi.uu.nl/
...
3. Run `python split.py` in `projects / Task019_ADAM / scripts` of the nnDetection repository.
4. [Info]: The provided instructions will automatically create a patient-stratified random split. We used a random split for our challenge submission. By renaming the provided split file in the `preprocessed` folders, nnDetection will automatically create a random split.
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
Please make sure to read the requirements and usage policies of the data and **give credit to the authors of the dataset**!
Please read the information from the homepage carefully and follow the rules and instructions provided by the original authors when using the data.
- Homepage: https://ribfrac.grand-challenge.org/
- Subtask: Task 1
## Setup
0. Follow the installation instructions of nnDetection and create a data directory named `Task020FG_RibFrac`. We added FG to the ID to indicate that we don't distinguish between the different classes. (Even if you prepare the data set with classes, the data needs to be placed inside this directory.)
1. Follow the instructions and usage policies to download the data and copy the data/labels/csv files to the following locations:
data -> `Task020FG_RibFrac / raw / imagesTr`; labels -> `Task020FG_RibFrac / raw / labelsTr`; csv files -> `Task020FG_RibFrac / raw`
2. Run `python prepare.py` in `projects / Task020FG_RibFrac / scripts` of the nnDetection repository.
Note: If no manual split is created, nnDetection will create a random 5-fold split, which we used for our results.
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.