Commit 3416b80c authored by mibaumgartner's avatar mibaumgartner
parents d710afc9 ddb4d304
......@@ -145,6 +145,9 @@ nndet_example --full [--num_processes]
The `full` problem is very easy and the final results should be near perfect.
After running the generation script follow the `Planning`, `Training` and `Inference` instructions below to construct the whole nnDetection pipeline.
## Guides
Work in progress
## Experiments
Besides the self-configuring method, nnDetection acts as a standard interface for many data sets.
We provide guides to prepare all data sets from our evaluation to the correct format and make it easy to reproduce our results.
......
......@@ -49,20 +49,20 @@ ADAM Results are listed under Benchmarks
5 Fold Cross Validation
| Model | Lymph Nodes |
|:-----------:|:-----------:|
| nnDetection | 0.205 |
| nnUNetPlus | 0.162 |
| nnUNetBasic | 0.159 |
| Model | Abdominal Lymph Nodes | Mediastinal Lymph Nodes |
|:-----------:|:---------------------:|:-----------------------:|
| nnDetection | 0.493 | 0.440 |
| nnUNetPlus | 0.378 | 0.334 |
| nnUNetBasic | 0.360 | 0.302 |
 
Test Split
| Model | Lymph Nodes |
|:-----------:|:-----------:|
| nnDetection | 0.270 |
| nnUNetPlus | 0.169 |
| Model | Abdominal Lymph Nodes | Mediastinal Lymph Nodes |
|:-----------:|:---------------------:|:-----------------------:|
| nnDetection | 0.470 | 0.500 |
| nnUNetPlus | 0.311 | 0.342 |
Luna results are listed under Benchmarks
......@@ -70,6 +70,18 @@ Luna results are listed under Benchmarks
</div>
#### References
- S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman, et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans. Medical Physics, 38(2):915–931, 2011
- L. Jin, J. Yang, K. Kuang, B. Ni, Y. Gao, Y. Sun, P. Gao, W. Ma, M. Tan, H. Kang, J. Chen, and M. Li. Deep-learning-assisted detection and segmentation of rib fractures from CT scans: Development and validation of FracNet. 62. Publisher: Elsevier
- C. Tabea Kossen, L. Kaufhold, M. Hüllebrand, J.-M. Kuhnigk, J. Brühning, J. Schaller, B. Pfahringer, A. Spuler, L. Goubergrits, and A. Hennemuth. Cerebral aneurysm detection and analysis, Mar. 2020
- K. Timmins, E. Bennink, I. van der Schaaf, B. Velthuis, Y. Ruigrok, and H. Kuijf. Intracranial Aneurysm Detection and Segmentation Challenge, Mar. 2020.
- N. Heller, N. Sathianathen, A. Kalapara, E. Walczak, K. Moore, H. Kaluzniak, J. Rosenberg, P. Blake, Z. Rengel, M. Oestreich, et al. The kits19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv preprint arXiv:1904.00445, 2019
- G. Litjens, O. Debats, J. Barentsz, N. Karssemeijer, and H. Huisman. Computer-aided detection of prostate cancer in MRI. IEEE TMI, 33(5):1083–1092, 2014
- R. Cuocolo, A. Comelli, A. Stefano, V. Benfante, N. Dahiya, A. Stanzione, A. Castaldo, D. R. D. Lucia, A. Yezzi, and M. Imbriaco. Deep learning whole-gland and zonal prostate segmentation on a public MRI dataset. Journal of Magnetic Resonance Imaging, 2021.
- A. L. Simpson, M. Antonelli, S. Bakas, M. Bilello, K. Farahani, B. Van Ginneken, A. Kopp-Schneider, B. A. Landman, G. Litjens, B. Menze, et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063, 2019.
- H. R. Roth, L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, and R. M. Summers. A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. In MICCAI, pages 520–527. Springer, 2014
- A. Seff, L. Lu, A. Barbu, H. Roth, H.-C. Shin, and R. M. Summers. Leveraging mid-level semantic boundary cues for automated lymph node detection. In MICCAI, pages 53–61. Springer, 2015
## Benchmarks
### Luna
Disclaimer:
......@@ -87,7 +99,6 @@ Zhu et al. (2018) | 0.692 | 0.769 | 0.824 | 0.865 | 0.893 | 0.917 |
Wang et al. (2018) | 0.676 | 0.776 | 0.879 | 0.949 | 0.958 | 0.958 | 0.958 | 0.878 |
Ding et al. (2017) | 0.748 | 0.853 | 0.887 | 0.922 | 0.938 | 0.944 | 0.946 | 0.891 |
Khosravan et al. (2018) | 0.709 | 0.836 | 0.921 | 0.953 | 0.953 | 0.953 | 0.953 | 0.897 |
Cao et al. (2020) | 0.868 | 0.900 | 0.913 | 0.915 | 0.916 | 0.931 | 0.932 | 0.911 |
Liu et al. (2019) | 0.848 | 0.876 | 0.905 | 0.933 | 0.943 | 0.957 | 0.970 | 0.919 |
Song et al. (2020) | 0.723 | 0.838 | 0.887 | 0.911 | 0.928 | 0.934 | 0.948 | 0.881 |
nnDetection v0.1 (ours, 2021) | 0.812 | 0.885 | 0.927 | 0.950 | 0.969 | 0.979 | 0.985 | 0.930 |
......@@ -96,12 +107,15 @@ Cao et al. (2020) + FPR | 0.848 | 0.899 | 0.925 | 0.936 | 0.949 | 0.957 |
Liu et al. (2019) + FPR | 0.904 | 0.914 | 0.933 | 0.957 | 0.971 | 0.971 | 0.971 | 0.952 |
<sup>*</sup> Some of the other methods also use FPR stages, but the methods listed below report results both with and without FPR.
&nbsp;
</div>
#### References (no particular order)
- A. A. A. Setio, A. Traverso, T. de Bel, M. S. Berens, C. van den Bogaard, P. Cerello, H. Chen, Q. Dou, M. E. Fantacci, B. Geurts, R. van der Gugten, P. A. Heng, B. Jansen, M. M. de Kaste, V. Kotov, J. Y.-H. Lin, J. T. Manders, A. Sóñora-Mengana, J. C. García-Naranjo, E. Papavasileiou, M. Prokop, M. Saletta, C. M. Schaefer-Prokop, E. T. Scholten, L. Scholten, M. M. Snoeren, E. L. Torres, J. Vandemeulebroucke, N. Walasek, G. C. Zuidhof, B. van Ginneken, and C. Jacobs. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The luna16 challenge. MedIA, 42:1–13, 2017.
- Z. Gong, D. Li, J. Lin, Y. Zhang and K. -M. Lam, "Towards Accurate Pulmonary Nodule Detection by Representing Nodules as Points With High-Resolution Network," in IEEE Access, vol. 8, pp. 157391-157402, 2020, doi: 10.1109/ACCESS.2020.3019104
- Q. Dou, H. Chen, L. Yu, J. Qin and P. Heng, "Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection," in IEEE Transactions on Biomedical Engineering, vol. 64, no. 7, pp. 1558-1567, July 2017, doi: 10.1109/TBME.2016.2613502.
- Gupta, A., Saar, T., Martens, O. and Moullec, Y.L. (2018), Automatic detection of multisize pulmonary nodules in CT images: Large-scale validation of the false-positive reduction step. Med. Phys., 45: 1135-1149. https://doi.org/10.1002/mp.12746
- J. Ding, A. Li, Z. Hu, and L. Wang. Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. In MICCAI, pages 559–567. Springer, 2017
- Q. Dou, H. Chen, Y. Jin, H. Lin, J. Qin, and P.-A. Heng. Automated pulmonary nodule detection via 3d convnets with online sample filtering and hybrid-loss residual learning. In MICCAI, pages 630–638. Springer, 2017
- N. Khosravan and U. Bagci. S4nd: Single-shot single-scale lung nodule detection. In MICCAI, pages 794–802. Springer, 2018.
......
docs/results/source/v001/luna.png (image updated: 281 KB → 268 KB)
......@@ -22,6 +22,7 @@ from loguru import logger
from nndet.arch.conv import conv_kwargs_helper
from nndet.utils import to_dtype
from nndet.utils.info import experimental
class BaseUFPN(nn.Module):
......@@ -417,6 +418,7 @@ class UFPNModular(BaseUFPN):
class PAUFPN(UFPNModular):
@experimental
def __init__(self,
conv: Callable,
strides: Sequence[int],
......
......@@ -39,7 +39,7 @@ import SimpleITK as sitk
from nndet.core.boxes import box_iou_np, box_size_np
from nndet.io.load import load_pickle, save_json
from nndet.utils.info import maybe_verbose_iterable
from nndet.utils.info import maybe_verbose_iterable, experimental, deprecate
def collect_overview(prediction_dir: Path, gt_dir: Path,
......@@ -366,6 +366,7 @@ def plot_sizes_bar(all_pred, all_target, all_boxes, iou, score,
return fig, ax
@experimental
def run_analysis_suite(prediction_dir: Path, gt_dir: Path, save_dir: Path):
for iou, score in maybe_verbose_iterable(list(product([0.1, 0.5], [0.1, 0.5]))):
_save_dir = save_dir / f"iou_{iou}_score_{score}"
......@@ -415,6 +416,7 @@ def run_analysis_suite(prediction_dir: Path, gt_dir: Path, save_dir: Path):
plt.close()
@deprecate(deprecate="v0.1", remove="v0.2")
def convert_box_to_nii_meta(pred_boxes: Tensor,
pred_scores: Tensor,
pred_labels: Tensor,
......
......@@ -33,6 +33,8 @@ from typing import Union, Optional
from pathlib import Path
from git import Repo, InvalidGitRepositoryError
import functools
import inspect
class SuppressPrint:
def __enter__(self):
......@@ -44,6 +46,60 @@ class SuppressPrint:
sys.stdout = self._original_stdout
def deprecate(
    replacement: Optional[str] = None,
    deprecate: Optional[str] = None,
    remove: Optional[str] = None,
):
    """
    Deprecate functions and classes

    Args:
        replacement: Optional replacement for the old element. If no
            replacement is provided (None), the element is expected to be
            removed completely.
        deprecate: Optional version from which the element is deprecated.
        remove: Optional version in which the element will be removed.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            func_name = func.__name__  # classes and functions both expose __name__
            time_str = "now" if deprecate is None else deprecate
            s = f"{func_name} is deprecated from {time_str}!"
            if remove is not None:
                s += f" It will be removed from nnDetection from {remove}."
            if replacement is not None:
                s += f" The replacement is {replacement}."
            else:
                s += " There will be no replacement."
            logger.warning(s)
            return func(*args, **kwargs)
        return wrapper
    return decorator
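As a usage sketch, the decorator can be exercised like this; the stand-in below re-implements the same message logic with the stdlib `warnings` module so it runs without loguru, and `old_convert`/`new_convert` are hypothetical names:

```python
import functools
import warnings


def deprecate(replacement=None, deprecate=None, remove=None):
    """Minimal stand-in for nndet.utils.info.deprecate (warnings instead of loguru)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            time_str = "now" if deprecate is None else deprecate
            msg = f"{func.__name__} is deprecated from {time_str}!"
            if remove is not None:
                msg += f" It will be removed from nnDetection from {remove}."
            if replacement is not None:
                msg += f" The replacement is {replacement}."
            else:
                msg += " There will be no replacement."
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecate(replacement="new_convert", deprecate="v0.1", remove="v0.2")
def old_convert(x):
    return x * 2


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_convert(21)  # warns, then runs normally

assert result == 42
assert "old_convert is deprecated from v0.1" in str(caught[0].message)
```

The wrapped callable behaves exactly as before; only a one-time warning per call is added.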
def experimental(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__ if inspect.isclass(func) else func.__qualname__
        logger.warning(f"This feature ({func_name}) is experimental! "
                       "It might not implement all features or may only be a simplification!")
        return func(*args, **kwargs)
    return wrapper
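A small usage sketch for the marker above; again the loguru call is replaced with the stdlib `warnings` module so the snippet is self-contained, and `fuse_features` is a hypothetical function:

```python
import functools
import inspect
import warnings


def experimental(func):
    """Minimal stand-in for nndet.utils.info.experimental (warnings instead of loguru)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__ if inspect.isclass(func) else func.__qualname__
        warnings.warn(f"This feature ({func_name}) is experimental!", UserWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper


@experimental
def fuse_features(a, b):
    return a + b


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = fuse_features(1, 2)

assert out == 3
assert "fuse_features" in str(caught[0].message)
```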
def get_requirements():
"""
Get all installed packages from currently active environment
......
import os
import shutil
import sys
from itertools import repeat
from multiprocessing import Pool
from pathlib import Path
from nndet.utils.check import env_guard
import numpy as np
from loguru import logger
import SimpleITK as sitk
from nndet.io import save_json
from nndet.io.prepare import create_test_split
from nndet.io.itk import load_sitk_as_array
from nndet.utils.info import maybe_verbose_iterable
def prepare_image(
    case_id: str,
    base_dir: Path,
    mask_dir: Path,
    raw_splitted_dir: Path,
):
    logger.info(f"Processing {case_id}")
    root_data_dir = base_dir / case_id

    # locate the single DICOM series directory for this case
    patient_data_dir = []
    for root, dirs, files in os.walk(root_data_dir, topdown=False):
        if any([f.endswith(".dcm") for f in files]):
            patient_data_dir.append(Path(root))
    assert len(patient_data_dir) == 1
    patient_data_dir = patient_data_dir[0]

    reader = sitk.ImageSeriesReader()
    dicom_names = reader.GetGDCMSeriesFileNames(str(patient_data_dir))
    reader.SetFileNames(dicom_names)
    data_itk = reader.Execute()

    patient_label_dir = mask_dir / case_id
    label_path = [p for p in patient_label_dir.iterdir()
                  if p.is_file() and p.name.endswith(".nii.gz")]
    assert len(label_path) == 1
    label_path = label_path[0]

    mask = load_sitk_as_array(label_path)[0]
    instances = np.unique(mask)
    instances = instances[instances > 0]

    # every instance is mapped to class 0 (LymphNode)
    meta = {"instances": {str(int(i)): 0 for i in instances}}
    meta["original_path_data"] = str(patient_data_dir)
    meta["original_path_label"] = str(label_path)

    save_json(meta, raw_splitted_dir / "labelsTr" / f"{case_id}.json")
    sitk.WriteImage(data_itk, str(raw_splitted_dir / "imagesTr" / f"{case_id}_0000.nii.gz"))
    shutil.copy(label_path, raw_splitted_dir / "labelsTr" / f"{case_id}.nii.gz")
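For illustration, the per-instance mapping written to the label JSON above can be reproduced on a toy mask (the array values here are hypothetical, not from the real data set):

```python
import json

import numpy as np

# toy instance mask: background 0, two lesion instances 1 and 2
mask = np.array([[0, 1, 1], [0, 2, 0]])

instances = np.unique(mask)
instances = instances[instances > 0]

# mirror the structure written by prepare_image: every instance maps to class 0 (LymphNode)
meta = {"instances": {str(int(i)): 0 for i in instances}}

print(json.dumps(meta))  # {"instances": {"1": 0, "2": 0}}
```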
@env_guard
def main():
    det_data_dir = Path(os.getenv("det_data"))
    task_data_dir = det_data_dir / "Task025_LymphNodes"
    source_data_base = task_data_dir / "raw"
    if not source_data_base.is_dir():
        raise RuntimeError(f"{source_data_base} should contain the raw data but does not exist.")

    raw_splitted_dir = task_data_dir / "raw_splitted"
    (raw_splitted_dir / "imagesTr").mkdir(parents=True, exist_ok=True)
    (raw_splitted_dir / "labelsTr").mkdir(parents=True, exist_ok=True)
    (raw_splitted_dir / "imagesTs").mkdir(parents=True, exist_ok=True)
    (raw_splitted_dir / "labelsTs").mkdir(parents=True, exist_ok=True)

    logger.remove()
    logger.add(sys.stdout, format="{level} {message}", level="DEBUG")
    logger.add(raw_splitted_dir.parent / "prepare.log", level="DEBUG")

    meta = {
        "name": "Lymph Node TCIA",
        "task": "Task025_LymphNodes",
        "target_class": None,
        "test_labels": True,
        "labels": {
            "0": "LymphNode",
        },
        "modalities": {
            "0": "CT",
        },
        "dim": 3,
    }
    save_json(meta, raw_splitted_dir.parent / "dataset.json")

    base_dir = source_data_base / "CT Lymph Nodes"
    mask_dir = source_data_base / "MED_ABD_LYMPH_MASKS"
    case_ids = sorted([p.name for p in base_dir.iterdir() if p.is_dir()])
    logger.info(f"Found {len(case_ids)} cases in {base_dir}")

    for cid in maybe_verbose_iterable(case_ids):
        prepare_image(
            case_id=cid,
            base_dir=base_dir,
            mask_dir=mask_dir,
            raw_splitted_dir=raw_splitted_dir,
        )

    # with Pool(processes=6) as p:
    #     p.starmap(
    #         prepare_image,
    #         zip(
    #             case_ids,
    #             repeat(base_dir),
    #             repeat(mask_dir),
    #             repeat(raw_splitted_dir)
    #         )
    #     )

    create_test_split(raw_splitted_dir,
                      num_modalities=len(meta["modalities"]),
                      test_size=0.3,
                      random_state=0,
                      shuffle=True,
                      )


if __name__ == '__main__':
    main()
......@@ -138,7 +138,12 @@ def main():
target_dir = model_dir / "consolidated"
logger.remove()
logger.add(sys.stdout, format="{level} {message}", level="INFO")
logger.add(
sys.stdout,
format="<level>{level} {message}</level>",
level="INFO",
colorize=True,
)
logger.add(Path(target_dir) / "consolidate.log", level="DEBUG")
logger.info(f"looking for models in {model_dir}")
......
......@@ -228,18 +228,17 @@ def import_single_case(logits_source: Path,
properties_file = logits_source.parent / f"{case_name}.pkl"
probs = np.load(str(logits_source))["softmax"]
    if properties_file.is_file():
        properties_dict = load_pickle(properties_file)
        bbox = properties_dict.get('crop_bbox')
        shape_original_before_cropping = properties_dict.get('original_size_of_raw_data')

    if bbox is not None:
        tmp = np.zeros((probs.shape[0], *shape_original_before_cropping))
        for c in range(3):
            # clip the bbox end to the original image bounds
            bbox[c][1] = np.min((bbox[c][0] + probs.shape[c + 1], shape_original_before_cropping[c]))
        tmp[:, bbox[0][0]:bbox[0][1], bbox[1][0]:bbox[1][1], bbox[2][0]:bbox[2][1]] = probs
        probs = tmp
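The restore-from-crop step can be checked in isolation; the shapes and bbox values below are hypothetical, chosen only to show how the cropped softmax volume is padded back into the original image extent:

```python
import numpy as np

# softmax probs predicted on a cropped region: (classes, z, y, x)
probs = np.random.rand(2, 4, 5, 6)
original_shape = (10, 12, 14)
# crop_bbox stores [start, end] per spatial axis; ends are recomputed below
bbox = [[2, None], [3, None], [4, None]]

# clip each bbox end to the original image bounds (mirrors the loop above)
for c in range(3):
    bbox[c][1] = min(bbox[c][0] + probs.shape[c + 1], original_shape[c])

# paste the cropped probabilities back into a zero volume of original size
tmp = np.zeros((probs.shape[0], *original_shape))
tmp[:, bbox[0][0]:bbox[0][1], bbox[1][0]:bbox[1][1], bbox[2][0]:bbox[2][1]] = probs

assert tmp.shape == (2, 10, 12, 14)
assert np.allclose(tmp[:, 2:6, 3:8, 4:10], probs)
```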
res = instance_results_from_seg(probs,
aggregation=aggregation,
......@@ -253,6 +252,11 @@ def import_single_case(logits_source: Path,
instances_target = logits_target_dir / f"{case_name}_instances.pkl"
boxes = {key: res[key] for key in ["pred_boxes", "pred_labels", "pred_scores"]}
boxes["original_size_of_raw_data"] = properties_dict["original_size_of_raw_data"]
boxes["itk_origin"] = properties_dict["itk_origin"]
boxes["itk_direction"] = properties_dict["itk_direction"]
boxes["itk_spacing"] = properties_dict["itk_spacing"]
save_pickle(boxes, detection_target)
if save_iseg:
instances = {key: res[key] for key in ["pred_instances", "pred_labels", "pred_scores"]}
......@@ -341,19 +345,23 @@ if __name__ == '__main__':
save_seg = args.save_seg
save_iseg = args.save_iseg
    nnunet_dir = nnunet_dirs[0]
    if task is None:
        # select corresponding nnDetection task
        task_names = [n for n in PurePath(nnunet_dir).parts if "Task" in n]
        if len(task_names) > 1:
            logger.error(f"Found multiple task names, trying to continue with {task_names[-1]}")
        if len(task_names) == 0:
            logger.error("Could not derive task name from path, please use "
                         "-t/--task to provide the name via the command line!")
        logger.info(f"Found nnunet task {task_names[-1]} in nnunet path")
        nnunet_task = task_names[-1]
        logger.info(f"Using nnunet task {nnunet_task} as detection task id")
        task = nnunet_task
    else:
        task = get_task(task, name=True)
task_dir = Path(os.getenv("det_models")) / task
initialize_config_module(config_module="nndet.conf")
cfg = compose(task, "config.yaml", overrides=[])
......@@ -436,6 +444,10 @@ if __name__ == '__main__':
for cid in case_ids:
copy_and_ensemble_test(cid, nnunet_dirs, nnunet_prediction_dir)
# copy properties
for p in [p for p in nnunet_dir.iterdir() if p.name.endswith(".pkl")]:
shutil.copyfile(p, nnunet_prediction_dir / p.name)
postprocessing_settings = load_pickle(nndet_unet_dir / "postprocessing.pkl")
target_dir = nndet_unet_dir / "test_predictions"
......
......@@ -63,7 +63,12 @@ def run(cfg: dict,
prediction_dir = training_dir / "test_predictions"
logger.remove()
logger.add(sys.stdout, format="{level} {message}", level="INFO")
logger.add(
sys.stdout,
format="<level>{level} {message}</level>",
level="INFO",
colorize=True,
)
logger.add(Path(training_dir) / "inference.log", level="INFO")
if process:
......
......@@ -201,7 +201,12 @@ def _train(
{"trainer": OmegaConf.to_container(cfg["trainer_cfg"], resolve=True)}))
logger.remove()
logger.add(sys.stdout, format="{level} {message}", level="INFO")
logger.add(
sys.stdout,
format="<level>{level} {message}</level>",
level="INFO",
colorize=True,
)
log_file = Path(os.getcwd()) / "train.log"
logger.add(log_file, level="INFO")
logger.info(f"Log file at {log_file}")
......