# Experiments & Data
The data sets used for our experiments are not hosted or maintained by us, please give credit to the authors of the data sets.
For some data sets we corrected labels during conversion; the corrected labels can be downloaded.
The `Experiments` section provides an overview of multiple guides which explain the preparation of the data sets.
## Toy Data set
Running `nndet_example` will automatically generate an example data set of 3D squares and squares with holes, which can be used to test the installation or to experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the data set).
The full problem is very easy and the final results should be near perfect.
After running the generation script, follow the `Planning`, `Training` and `Inference` instructions below to construct the whole nnDetection pipeline.
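To illustrate the kind of data the toy data set contains, here is a minimal pure-NumPy sketch that builds a solid cube and a cube with a hollow center. The helper name and sizes are made up for illustration; this is not nnDetection code, and the actual generated data set differs in detail.

```python
import numpy as np

def make_cube(size=32, lo=8, hi=24, hole=False):
    """Build a binary 3D volume containing a filled cube; optionally
    carve out a smaller central region to create a 'square with a hole'.
    Hypothetical helper for illustration only (not part of nnDetection)."""
    vol = np.zeros((size, size, size), dtype=np.uint8)
    vol[lo:hi, lo:hi, lo:hi] = 1
    if hole:
        c = (lo + hi) // 2       # center of the cube
        r = (hi - lo) // 4       # half-width of the carved-out region
        vol[c - r:c + r, c - r:c + r, c - r:c + r] = 0
    return vol

solid = make_cube()
hollow = make_cube(hole=True)
```

Distinguishing the two classes (solid vs. hollow) is deliberately easy, which is why near-perfect results are expected on this task.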
## Experiments
Besides the self-configuring method, nnDetection acts as a standard interface for many data sets.
We provide guides to prepare all data sets from our evaluation to the correct format, making it easy to reproduce our results.
Furthermore, we provide pretrained models which can be used without investing large amounts of compute to rerun our experiments (see Section `Pretrained Models`).
0. Follow the installation instructions of nnDetection and create a data directory named `Task011_Kits`.
1. Follow the instructions and usage policies to download the data and place all the folders which contain the data and labels for each case into `Task011_Kits/raw`.
2. Run `python prepare.py` in `projects/Task011_Kits/scripts` of the nnDetection repository.
3. Run `nndet_seg2det 011` to convert the semantic segmentation labels to instance segmentations.
4. Run ... to download the manually corrected labels and replace the original ones.
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
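The seg-to-instance conversion in step 3 essentially amounts to connected-component labelling: each spatially separate region of the semantic mask becomes its own instance. A simplified pure-NumPy sketch of that idea follows; the function name is made up, and `nndet_seg2det` itself is more involved than this.

```python
import numpy as np
from collections import deque

def seg2instances(mask):
    """Label 6-connected components of a binary 3D mask via flood fill.
    Simplified illustration of a seg->instance conversion; not the
    actual nndet_seg2det implementation."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    next_id = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                    # voxel already assigned to an instance
        next_id += 1
        labels[start] = next_id
        queue = deque([start])
        while queue:                    # breadth-first flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < mask.shape[i] for i in range(3))
                        and mask[n] and not labels[n]):
                    labels[n] = next_id
                    queue.append(n)
    return labels, next_id
```

Two foreground voxels end up with the same label exactly when they are connected through the mask, which is the property the detection pipeline needs to treat each lesion as a separate object.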