Please make sure to read the requirements and usage policies of the data and **give credit to the authors of the dataset**!
Please read the information from the homepage carefully and follow the rules and instructions provided by the original authors when using the data.
- Homepage: http://medicaldecathlon.com/
## Setup
0. Follow the installation instructions of nnDetection and create the data directories for the intended tasks, e.g. `Task003_Liver`.
1. Follow the instructions and usage policies to download the data and place the images, labels, and `dataset.json` file inside the `raw` folder of the respective task, e.g. imagesTr -> `Task003_Liver / raw / imagesTr`, labelsTr -> `Task003_Liver / raw / labelsTr`, and dataset.json -> `Task003_Liver / raw / dataset.json`.
2. Run `python prepare.py [tasks]` in `projects / Task001_Decathlon / scripts` of the nnDetection repository, e.g. to prepare all tasks: `python prepare.py Task003_Liver Task007_Pancreas Task008_HepaticVessel Task010_Colon`
3. Run `nndet_seg2det [tasks]` to convert the semantic segmentation labels to instance segmentations, e.g. to convert all tasks: `nndet_seg2det 003 007 008 010`
4. Run ... to download and replace the manually corrected labels. # TODO
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
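Before running `prepare.py`, the raw layout from step 1 can be sanity-checked with a small script. This is only a sketch: it inspects the file system and prints what is missing; `DET_DATA` is an assumed name for your nnDetection data root, not an official variable.

```shell
# Sketch only: verify that Task*/raw contains imagesTr, labelsTr and dataset.json.
# DET_DATA is an assumption -- point it at your nnDetection data root.
DET_DATA="${DET_DATA:-./det_data}"

check_task() {
    # Report any missing pieces of <task>/raw; return non-zero if incomplete.
    local task="$1" bad=0
    for sub in imagesTr labelsTr; do
        [ -d "$DET_DATA/$task/raw/$sub" ] || { echo "missing dir:  $task/raw/$sub"; bad=1; }
    done
    [ -f "$DET_DATA/$task/raw/dataset.json" ] || { echo "missing file: $task/raw/dataset.json"; bad=1; }
    return "$bad"
}

for t in Task003_Liver Task007_Pancreas Task008_HepaticVessel Task010_Colon; do
    check_task "$t" && echo "$t: raw layout OK"
done
```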
0. Follow the installation instructions of nnDetection and create a data directory named `Task011_Kits`.
1. Follow the instructions and usage policies to download the data and place all the folders which contain the data and labels for each case into `Task011_Kits / raw`.
2. Run `python prepare.py` in `projects / Task011_Kits / scripts` of the nnDetection repository.
3. Run `nndet_seg2det 011` to convert the semantic segmentation labels to instance segmentations.
4. Run ... to download and replace the manually corrected labels. # TODO
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
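Step 1 can be sketched as a small copy helper. `case_*` is an assumption about how the downloaded per-case folders are named, and both paths in the usage example are placeholders; adapt them to your download.

```shell
# Sketch: copy every downloaded per-case folder into Task011_Kits/raw.
# The case_* pattern is an assumption about the download layout.
copy_kits_cases() {
    local src="$1" dst="$2" copied=0
    mkdir -p "$dst"
    for case_dir in "$src"/case_*; do
        [ -d "$case_dir" ] || continue   # skip if the glob matched nothing
        cp -r "$case_dir" "$dst/"
        copied=$((copied + 1))
    done
    echo "$copied"   # number of case folders copied
}

# Usage (placeholder paths):
# copy_kits_cases ./kits_download ./det_data/Task011_Kits/raw
```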
0. Follow the installation instructions of nnDetection and create a data directory named `Task016_Luna`.
1. Follow the instructions and usage policies to download the data and place all the subsets into `Task016_Luna / raw`.
2. Run `python prepare.py` in `projects / Task016_Luna / scripts` of the nnDetection repository.
The data is now converted to the correct format and the instructions from the nnDetection README can be used to train the networks.
Notes:
- Since Luna uses a predefined 10-fold cross-validation, all 10 folds need to be trained.
- All runs should use the `--sweep` option, and consolidation should be performed with `--no_model -c copy`, since no separate test set will be predicted.
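The notes above can be sketched as a short loop. The commands are only printed here (drop the `echo` to execute them); `nndet_train <task> --sweep -o exp.fold=<f>` follows the nnDetection README, while the model name in the consolidation line is a placeholder for your trained model directory.

```shell
# Sketch: print the training command for each of the 10 Luna folds.
luna_fold_cmds() {
    local fold
    for fold in 0 1 2 3 4 5 6 7 8 9; do
        echo "nndet_train 016 --sweep -o exp.fold=$fold"
    done
}
luna_fold_cmds

# Afterwards, consolidate without copying model weights
# (RetinaUNetV001_D3V001_3d is a placeholder model name):
echo "nndet_consolidate 016 RetinaUNetV001_D3V001_3d --no_model -c copy"
```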
## Evaluation
1. Run `python prepare_eval_cpm.py [model_name]` to convert the predictions to the Luna format.
Note: The script needs access to the raw_splitted images.
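Because the script needs the raw_splitted images, a small guard can catch a missing layout before the conversion starts. The `raw_splitted/imagesTr` location is an assumption based on the usual nnDetection data layout, and the model name is a placeholder.

```shell
# Sketch: check for the raw_splitted images before preparing the Luna evaluation.
# The raw_splitted/imagesTr path is an assumption about the nnDetection layout.
eval_ready() {
    [ -d "$1/Task016_Luna/raw_splitted/imagesTr" ]
}

DET_DATA="${DET_DATA:-./det_data}"
if eval_ready "$DET_DATA"; then
    echo "python prepare_eval_cpm.py <model_name>"   # model name is a placeholder
else
    echo "raw_splitted images not found under $DET_DATA/Task016_Luna" >&2
fi
```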