Commit a960f549 authored by mibaumgartner's avatar mibaumgartner

init prototype

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.vscode
*.simg
*.sif
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
*.memmap
*.png
*.zip
*.npz
*.npy
*.jpg
*.jpeg
.idea
*.txt
.idea/*
*.nii.gz
*.nii
*.tif
*.bmp
*.pkl
*.xml
*.pdf
*.model
.DS_Store
FROM nvidia/cuda:11.1.1-devel-ubuntu20.04
ARG env_det_num_threads=6
ARG env_det_verbose=1
# Setup environment variables
ENV TORCH_CUDA_ARCH_LIST=6.1+PTX;7.0+PTX;7.5+PTX FORCE_CUDA=1
ENV det_data=/opt/data det_models=/opt/models det_num_threads=$env_det_num_threads det_verbose=$env_det_verbose OMP_NUM_THREADS=1
# Install some tools
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && apt-get install -y \
git \
cmake \
make \
wget \
gnupg \
build-essential \
software-properties-common \
gdb \
ninja-build
# Setup miniconda and create a new python environment with python 3.7
RUN wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh \
&& chmod +x miniconda.sh \
&& ./miniconda.sh -b -p /opt/miniconda \
&& rm ./miniconda.sh \
&& ln -s /opt/miniconda/bin/activate /activate \
&& . /activate \
&& pip install numpy \
&& pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
# Install own code
COPY ./requirements.txt .
RUN mkdir ${det_data} \
&& mkdir ${det_models} \
&& mkdir -p /opt/code/nndet \
&& . /activate \
&& pip install -r requirements.txt \
&& pip install hydra-core --upgrade --pre \
&& pip install git+https://github.com/mibaumgartner/pytorch_model_summary.git
WORKDIR /opt/code/nndet
COPY . .
RUN . /activate && pip install -v -e .
<div align="center">
<img src="docs/source/nnDetection.svg" width="600px">
![Version](https://img.shields.io/badge/nnDetection-v1.0-blue)
![Python](https://img.shields.io/badge/python-3.8-orange)
![CUDA](https://img.shields.io/badge/CUDA-10.1%2F10.2%2F11.0-green)
![license](https://img.shields.io/badge/License-Apache%202.0-red.svg)
</div>
# Installation
1. Install CUDA (>10.1) and cudnn (make sure to select [compatible versions](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html)!)
2. [Optional] Depending on your GPU you might need to set `TORCH_CUDA_ARCH_LIST`, check [compute capabilities](https://developer.nvidia.com/cuda-gpus) here.
3. Install [torch](https://pytorch.org/) (requires pytorch 1.7+; make sure to match the pytorch and CUDA versions!)
4. Install [torchvision](https://github.com/pytorch/vision) (make sure to match the versions!)
5. Clone nnDetection, `cd [path_to_repo]` and `pip install -e .`
6. Upgrade hydra to next release: `pip install hydra-core --upgrade --pre`
7. Set environment variables (more info can be found below):
- `det_data`: [required] Path to the source directory where all the data will be located
- `det_models`: [required] Path to directory where all models will be saved
- `OMP_NUM_THREADS=1` : [required] Needs to be set! Otherwise bad things will happen... Refer to the batchgenerators documentation.
- `det_num_threads`: [recommended] Number of processes to use for augmentation (at least 6, default 12)
- `det_verbose`: [optional] Can be used to deactivate progress bars (activated by default)
- `MLFLOW_TRACKING_URI`: [optional] Specify the logging directory of mlflow. Refer to the [mlflow documentation](https://www.mlflow.org/docs/latest/tracking.html) for more information.
Note: nnDetection was developed on Linux => Windows is not supported.
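The environment variables above could be set in `~/.bashrc`, for example. A minimal sketch; the paths are placeholders, the numeric values follow the recommendations above:

```shell
# Placeholder paths -- adjust to your setup
export det_data=/path/to/nndet_data      # [required] source data directory
export det_models=/path/to/nndet_models  # [required] model directory
export OMP_NUM_THREADS=1                 # [required] see batchgenerators docs
export det_num_threads=12                # [recommended] augmentation workers
export det_verbose=1                     # [optional] 1 = show progress bars
```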
<details close>
<summary>Test Installation</summary>
<br>
Run the following command in the terminal (not in the pytorch root folder) to verify that the compilation of the C++/CUDA code was successful:
```bash
python -c "import torch; import nndet._C; import nndet"
```
To test the whole installation please run the Toy Dataset example.
</details>
<details close>
<summary>Maximising Training Speed</summary>
<br>
To get the best possible performance we recommend using CUDA 11.0+ with cuDNN 8.1.X+ and a locally compiled version of PyTorch 1.7+.
</details>
<details close>
<summary>Docker Container</summary>
<br>
The provided Dockerfile can be used to set up quick development environments or to deploy nnDetection.
Please install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) before continuing.
All projects which are based on nnDetection assume that the base image was built with the tagging scheme `nndetection:[version]`.
To build a container (nnDetection version 0.1), run the following command from the base directory:
```bash
docker build -t nndetection:0.1 .
```
or
```bash
docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .
```
to overwrite the provided default parameters.
The docker container expects the data and models in `/opt/data` and `/opt/models` respectively.
The directories need to be mounted via docker commands e.g.
```bash
docker run --gpus all -v /path/to/data/on/pc:/opt/data -v /path/to/models/on/pc:/opt/models -it nndetection:0.1 /bin/bash
```
If nnDetection is already configured on the host PC the following command can be used to start the container with the correct paths.
```bash
docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it nndetection:0.1 /bin/bash
```
After activating the environment via `. /activate` inside the container, training or inference scripts can be executed with the usual commands (see below).
Warning:
1. The current pytorch versions do not include the 3D convolution speedup, so a pytorch compiled from source will run faster than this container.
2. When running a training inside the container it is necessary to [increase the shared memory](https://stackoverflow.com/questions/30210362/how-to-increase-the-size-of-the-dev-shm-in-docker-container).
I tested the following configuration on my local workstation:
```bash
docker run --gpus all --shm-size=24gb -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it nndetection:0.1 /bin/bash
```
</details>
# nnDetection
<div align="center">
<img src="docs/source/nnDetectionFunctional.svg" width="600px">
</div>
<details close>
<summary>nnDetection Module Overview</summary>
<br>
<div align="center">
<img src="docs/source/nnDetectionModule.svg" width="600px">
</div>
</details>
<details close>
<summary>nnDetection Functional Details</summary>
<br>
<div align="center">
<img src="docs/source/nnDetectionFunctionalDetails.svg" width="600px">
</div>
</details>
# Experiments & Data
The datasets used for our experiments are not hosted or maintained by us; please give credit to the authors of the datasets.
Some labels in the datasets we converted were corrected, and the corrected versions can be downloaded.
The `Reproducing Experiments` section has an overview of multiple guides which explain the preparation of the datasets.
## Toy Dataset
Running `nndet_example` will automatically generate an example dataset with 3D squares and squares with holes which can be used to test the installation or to experiment with prototype code.
The problem is very easy and the final results should be near perfect.
After running the generation script follow the `Planning`, `Training` and `Inference` instructions below to construct the whole nnDetection pipeline.
## Reproducing Experiments
<div align="center">
| <!-- --> | <!-- --> | <!-- --> |
|:--------------------------------:|:----------------------:|:----------------------------:|
| [Task 003 Liver](#TODO) | [Task 011 Kits](#TODO) | [Task 020 RibFrac](#TODO) |
| [Task 007 Pancreas](#TODO) | [Task 012 LIDC](#TODO) | [Task 021 ProstateX](#TODO) |
| [Task 008 Hepatic Vessel](#TODO) | [Task 017 CADA](#TODO) | [Task 025 LymphNodes](#TODO) |
| [Task 010 Colon](#TODO) | [Task 019 ADAM](#TODO) | [Task 016 Luna](#TODO) |
</div>
## Adding New Datasets
nnDetection relies on a standardized input format which is very similar to the [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) format and allows easy integration of new datasets.
The format is explained below.
### Folders
All datasets should reside in a `Task[Number]_[Name]` folder inside the detection data folder (set the path to this folder with the `det_data` environment variable).
An overview is provided below (`[Name]` denotes a folder, `-` denotes a file, indentation indicates substructure):
```text
${det_data}
[Task000_Example]
- dataset.yaml # dataset.json works too
[raw_splitted_data]
[imagesTr]
- case0000_0000.nii.gz # case0000 modality 0
- case0000_0001.nii.gz # case0000 modality 1
- case0001_0000.nii.gz # case0001 modality 0
- case0001_0001.nii.gz # case0001 modality 1
[labelsTr]
- case0000.nii.gz # instance segmentation case0000
- case0000.json # properties of case0000
- case0001.nii.gz # instance segmentation case0001
- case0001.json # properties of case0001
[imagesTs] # optional, same structure as imagesTr
...
[labelsTs] # optional, same structure as labelsTr
...
[Task001_Example1]
...
```
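As an illustration, the skeleton above could be created with a few lines of Python. This is a sketch; the helper and the task name are examples, not part of nnDetection:

```python
from pathlib import Path

def create_task_skeleton(det_data: str, task: str) -> Path:
    """Create the empty raw_splitted_data folder structure for a new task."""
    root = Path(det_data) / task
    for sub in ("imagesTr", "labelsTr", "imagesTs", "labelsTs"):
        (root / "raw_splitted_data" / sub).mkdir(parents=True, exist_ok=True)
    return root

# Example usage against a temporary directory
import tempfile
root = create_task_skeleton(tempfile.mkdtemp(), "Task000_Example")
print(sorted(p.name for p in (root / "raw_splitted_data").iterdir()))
# -> ['imagesTr', 'imagesTs', 'labelsTr', 'labelsTs']
```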
### Dataset Info
`dataset.yaml` or `dataset.json` provides general information about the dataset:
Note: [Important] Classes and modalities start with index 0!
```yaml
task: Task000D3_Example
name: "Example" # [Optional]
dim: 3 # number of spatial dimensions of the data
target_class: # define class of interest for patient level evaluations # TODO: check if this should be included
test_labels: True # manually split test set
labels: # classes of dataset; need to start at 0
"0": "Square"
"1": "SquareHole"
modalities: # modalities of dataset; need to start at 0
"0": "CT"
```
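A quick sanity check of the dataset file can catch off-by-one class indices early. The snippet below is an illustrative sketch using the json variant; `check_dataset_info` is a hypothetical helper, not part of nnDetection, and the validation rules are our own assumptions:

```python
import json

def check_dataset_info(info):
    """Sketch: verify the keys shown in the example above are present and
    that label/modality indices start at 0 and are consecutive."""
    for key in ("task", "dim", "labels", "modalities"):
        assert key in info, f"missing required key: {key}"
    for field in ("labels", "modalities"):
        indices = sorted(int(k) for k in info[field])
        assert indices == list(range(len(indices))), f"{field} must start at 0"

info = json.loads('''{
    "task": "Task000D3_Example",
    "dim": 3,
    "test_labels": true,
    "labels": {"0": "Square", "1": "SquareHole"},
    "modalities": {"0": "CT"}
}''')
check_dataset_info(info)  # passes silently
```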
### Image Format
nnDetection uses the same image format as nnU-Net.
Each case consists of at least one 3D NIfTI file with one modality and is saved in the `images` folders.
If multiple modalities are available, each modality uses a separate file and the suffix at the end of the name indicates the modality (corresponding to the number specified in the dataset file).
An example with two modalities could look like this:
```text
- case001_0000.nii.gz # Case ID: case001; Modality: 0
- case001_0001.nii.gz # Case ID: case001; Modality: 1
- case002_0000.nii.gz # Case ID: case002; Modality: 0
- case002_0001.nii.gz # Case ID: case002; Modality: 1
```
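The naming convention can be parsed mechanically. A sketch; `parse_image_name` is a hypothetical helper based on the examples above, not part of nnDetection:

```python
import re

def parse_image_name(name: str):
    """Split an image filename into (case_id, modality index),
    following the `[case]_[XXXX].nii.gz` convention shown above."""
    match = re.fullmatch(r"(.+)_(\d{4})\.nii(\.gz)?", name)
    if match is None:
        raise ValueError(f"unexpected image name: {name}")
    return match.group(1), int(match.group(2))

print(parse_image_name("case001_0001.nii.gz"))  # -> ('case001', 1)
```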
If multiple modalities are available, please check beforehand whether they need to be registered, and perform the registration before nnDetection preprocessing. nnDetection does **not** include automatic registration of multiple modalities.
### Label Format
Labels are encoded with two files per case: one NIfTI file which contains the instance segmentation and one json file which contains the "meta" information of each instance.
The NIfTI file should contain all annotated instances, where each instance has a unique number in consecutive order (0 ALWAYS refers to background, 1 to the first instance, 2 to the second instance, ...).
`case[XXXX].json` label files need to provide the class of every instance in the segmentation. In the following example, the first instance is assigned to class `0` and the second instance to class `1`:
```json
{
"instances": {
"1": 0,
"2": 1
}
}
```
Each label file needs a corresponding json file to define the classes.
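A consistency check between segmentation and label file can be sketched as follows; `check_instances` is a hypothetical helper, and `instance_ids` stands in for the set of voxel values found in the NIfTI file (actually reading the image is out of scope here):

```python
import json

def check_instances(instance_ids, label_json):
    """Sketch: verify that every foreground instance in the segmentation has
    a class entry in the json file and that ids are consecutive from 1."""
    mapping = json.loads(label_json)["instances"]
    ids_in_json = {int(k) for k in mapping}
    foreground = set(instance_ids) - {0}  # 0 is always background
    assert foreground == ids_in_json, "segmentation and json disagree"
    assert sorted(foreground) == list(range(1, len(foreground) + 1)), \
        "instance ids must be consecutive, starting at 1"

# Example matching the json above: background plus two instances
check_instances({0, 1, 2}, '{"instances": {"1": 0, "2": 1}}')
```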
## Using nnDetection
The following paragraphs provide a high-level overview of the functionality of nnDetection and the available commands.
A typical flow of commands would look like this:
```text
nndet_prep -> nndet_unpack -> nndet_train -> nndet_consolidate -> nndet_predict
```
Each of these commands is explained below; more detailed information can be obtained by running `nndet_[command] -h` in the terminal.
### Planning & Preprocessing
Before training the networks, nnDetection needs to preprocess and analyze the data.
The preprocessing stage normalizes and resamples the data, while the analyzed properties are used to create a plan which configures the training.
nnDetectionV0 requires a GPU with approximately the same amount of VRAM as planned for training (i.e. we used a completely freed RTX2080TI) to perform a live estimation of the VRAM used by the network.
Future releases will improve this process...
```bash
nndet_prep [tasks] [-o / --overwrites]
# Example
nndet_prep 000
# Script
# /experiments/preprocess.py - main()
```
The `-o` option can be used to overwrite parameters for planning and preprocessing (refer to the config files to see all parameters). A typical use case is to increase or decrease `prep.num_processes` (number of processes used for cropping) and `prep.num_processes_processing` (number of processes used for resampling) depending on the size/number of modalities of the data and the available RAM. The current values are fairly safe if 64GB of RAM is available.
After planning and preprocessing the resulting data folder structure should look like this:
```text
[Task000_Example]
[raw_splitted]
[raw_cropped] # only needed for different resampling strategies
[imagesTr] # stores cropped image data; contains npz files
[labelsTr] # stores labels
[preprocessed]
[analysis]
[properties] # sufficient for new plans
[labelsTr] # labels in original format (original spacing)
[labelsTs] # optional
[Data identifier; e.g. D3V001_3d]
[imagesTr] # preprocessed data
[labelsTr] # preprocessed labels (resampled spacing)
- {name of plan}.pkl e.g. D3V001_3d.pkl
```
Before starting the training, copy the data (Task folder, dataset info and preprocessed folder are needed) to an SSD (highly recommended) and unpack the image data with:
TODO: update name after refactoring planner name
```bash
nndet_unpack [path] [num_processes]
# Example (unpack example with 6 processes)
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3C002_3d/imagesTr 6
# Script
# /experiments/utils.py - unpack()
```
### Training and Evaluation
After the planning and preprocessing stage is finished the training phase can be started.
The default setup of nnDetection is trained in a 5 fold cross-validation scheme.
First, check which plans were generated during planning by checking the preprocessed folder and looking for the pickled plan files. In most cases only the default plan (`D3V001_3d`) will be generated, but there might be instances (e.g. Kits) where the low resolution plan (`D3V001LR1_3d`) is generated too.
```bash
nndet_train [task] [-o / --overwrites] [--sweep]
# Example (train default plan D3V001_3d and search best inference parameters)
nndet_train 000 --sweep
# Script
# /experiments/train.py - train()
```
Use `-o exp.fold=X` to overwrite the trained fold; this should be run for all folds `X = 0, 1, 2, 3, 4`!
The `--sweep` option tells nnDetection to look for the best hyperparameters for inference by empirically evaluating them on the validation set.
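The loop over all folds can be scripted. This dry-run sketch only prints the commands it would execute; remove the `echo` to actually start the trainings:

```shell
# Dry run: print the nndet_train invocation for each of the five folds
for fold in 0 1 2 3 4; do
  echo nndet_train 000 -o exp.fold=$fold --sweep
done
```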
Sweeping can also be performed later by running the following command:
```bash
nndet_sweep [task] [model] [fold]
# Example (sweep Task 000 of model RetinaUNetV001_D3V001_3d in fold 0)
nndet_sweep 000 RetinaUNetV001_D3V001_3d 0
# Script
# /experiments/train.py - sweep()
```
Evaluation can be invoked by the following command (requires access to model and preprocessed data):
```bash
nndet_eval [task] [model] [fold] [--test] [--case] [--boxes] [--seg] [--instances] [--analyze_boxes]
# Example (evaluate and analyze box predictions of default model)
nndet_eval 000 RetinaUNetV001_D3V001_3d 0 --boxes --analyze_boxes
# Script
# /experiments/train.py - evaluate()
# Note: --test invokes evaluation of the test set
# Note: --seg, --instances are placeholders for future versions and not working yet
```
### Inference
After running all folds it is time to collect the models and create a unified inference plan.
The following command will copy all models and predictions per fold; by adding the `sweep` options, the empirical hyperparameter optimization across all folds can be started.
This will generate a unified plan for all models which will be used during inference.
```bash
nndet_consolidate [task] [model] [--overwrites] [--consolidate] [--num_folds] [--no_model] [--sweep_boxes] [--sweep_instances]
# Example
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes
# Script
# /experiments/consolidate.py - main()
```
Data which is located in `raw_splitted/imagesTs` will be automatically preprocessed and predicted by running the following command:
```bash
nndet_predict [task] [model] [--fold] [--num_models] [--num_tta] [--no_preprocess]
# Example
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1
# Script
# /experiments/predict.py - main()
# Note: --num_models is not supported by default
```
If a self-made test set was used, evaluation can be performed by invoking `nndet_eval` as described above.
## nnU-Net for Detection
TODO
## Pretrained models
TODO
# FAQ
<details close>
<summary>GPU requirements</summary>
<br>
nnDetection v0.1 was developed for GPUs with at least 11GB of VRAM (e.g. RTX2080TI, TITAN RTX).
All of our experiments were conducted with a RTX2080TI.
While the memory can be adjusted by manipulating the correct setting we recommend using the default values for now.
Future releases will refactor the planning stage to improve the VRAM estimation and add support for different memory budgets.
</details>
<details close>
<summary>Error: Undefined CUDA symbols when importing `nndet._C`</summary>
<br>
Please double check the CUDA versions of your PC, pytorch, torchvision and the nnDetection build!
Follow the installation instruction at the beginning!
</details>
<details close>
<summary>Error: "No kernel image is available for execution"</summary>
<br>
You are probably executing the build on a machine with a GPU architecture which was not present/set during the build.
Please check [this list](https://developer.nvidia.com/cuda-gpus) to find the correct SM architecture and set `TORCH_CUDA_ARCH_LIST`
appropriately (see the Dockerfile for an example).
Make sure to delete all caches before rebuilding!
</details>
<details close>
<summary>Training with bounding boxes</summary>
<br>
The first release of nnDetection focuses on 3d medical images and Retina U-Net.
As a consequence, training (specifically planning and augmentation) requires segmentation annotations.
In many cases this limitation can be circumvented by converting the bounding boxes into segmentations.
</details>
<details close>
<summary>Mask RCNN and 2D Datasets</summary>
<br>
2D datasets and Mask R-CNN are not supported in the first release.
We hope to provide these sometime in the future.
</details>
<details close>
<summary>Multi GPU Training</summary>
<br>
Multi GPU training is not officially supported yet.
Inference and the metric computation are not properly designed to support these use cases!
</details>
<details close>
<summary>Prebuild package</summary>
<br>
We are planning to provide prebuilt wheels in the future, but none are available right now.
Please use the provided Dockerfile or the installation instructions to run nnDetection.
</details>
# Cite
If you use nnDetection for your project/research/work please cite the following paper:
```text
TODO
```
# Acknowledgements
TODO
<?xml version="1.0" encoding="UTF-8"?>
<svg width="1198px" height="948px" viewBox="0 0 1198 948" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title>Group 33</title>
<defs>
<path d="M0,0 L1198,0 L1198,47 L0,47 L0,0 Z" id="path-1"></path>
<mask id="mask-2" maskContentUnits="userSpaceOnUse" maskUnits="objectBoundingBox" x="0" y="0" width="1198" height="47" fill="white">
<use xlink:href="#path-1"></use>
</mask>
</defs>
<g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="Group-27">
<g id="Group-19">
<g id="Group-15" transform="translate(0.000000, 65.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="102"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#BEEEB8" x="1.5" y="1.5" width="184" height="102"></rect>
<text id="Resampling-Strategy" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="44.4775391" y="18">Resampling</tspan>
<tspan x="58.4829102" y="40">Strategy</tspan>
</text>
<text id="Image:-We-use-the-sa" font-family="Helvetica-Bold, Helvetica" font-size="18" font-weight="bold" fill="#000000">
<tspan x="196" y="27">Image</tspan>
<tspan x="248.022461" y="27" font-family="Helvetica" font-weight="normal">: We use the same image resampling </tspan>
<tspan x="196" y="49" font-family="Helvetica" font-weight="normal">procedure as nnU-Net</tspan>
<tspan x="196" y="71">Annotation</tspan>
<tspan x="290.974609" y="71" font-family="Helvetica" font-weight="normal">: Annotations are resampled with </tspan>
<tspan x="196" y="93" font-family="Helvetica" font-weight="normal">nearest neighbor</tspan>
</text>
</g>
<g id="Group-15">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="65"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#FFFFFF" x="1.5" y="1.5" width="184" height="65"></rect>
<text id="Parameter" font-family="Helvetica-Bold, Helvetica" font-size="18" font-weight="bold" fill="#000000">
<tspan x="49.9711914" y="18">Parameter</tspan>
</text>
<text id="Description" font-family="Helvetica-Bold, Helvetica" font-size="18" font-weight="bold" fill="#000000">
<tspan x="340.491211" y="18">Description</tspan>
</text>
</g>
<g id="Group-15" transform="translate(0.000000, 167.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="214"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#BEEEB8" x="1.5" y="1.5" width="184" height="214"></rect>
<text id="Network-Topology-&amp;-F" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="12.1206055" y="18">Network Topology &amp;</tspan>
<tspan x="36.9804688" y="40">FPN Levels &amp;</tspan>
<tspan x="48.9775391" y="62">Patch Size</tspan>
</text>
<text id="The-anisotric-axis-o" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="27">The anisotric axis of the patch size is initialized </tspan>
<tspan x="195" y="49">with the median shape of the anisotropic axis of </tspan>
<tspan x="195" y="71">the dataset. The isotropic axes are initialized </tspan>
<tspan x="195" y="93">with the minimum size of the isotropic axes of </tspan>
<tspan x="195" y="115">the dataset. </tspan>
<tspan x="195" y="137">The patch size is decreased while adapting the </tspan>
<tspan x="195" y="159">network architecture and feature pyramid </tspan>
<tspan x="195" y="181">network levels until the memory constrains are </tspan>
<tspan x="195" y="203">fulfilled. The batch size is fixed to four.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(0.000000, 381.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="121"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#BEEEB8" x="1.5" y="1.5" width="184" height="121"></rect>
<text id="Anchor-Optimization" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="63.4838867" y="18">Anchor</tspan>
<tspan x="41.9814453" y="40">Optimization</tspan>
</text>
<text id="The-anchor-sizes-are" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="25">The anchor sizes are determined by maximising </tspan>
<tspan x="195" y="47">the IoU of the best fitting anchor on the given </tspan>
<tspan x="195" y="69">object sizes extracted from the training set. </tspan>
<tspan x="195" y="91">Optimization of three anchor sizes per axis is </tspan>
<tspan x="195" y="113">performed via differential evolution.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(0.000000, 502.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="168"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8EEDC" x="1.5" y="1.5" width="184" height="168"></rect>
<text id="Low-Resolution-Model" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="30.4633789" y="18">Low Resolution</tspan>
<tspan x="67.4873047" y="40">Model</tspan>
</text>
<text id="The-low-resolution-c" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="24">The low resolution configuration will be triggered </tspan>
<tspan x="195" y="46">if the 99.5 percentile of object sizes along any</tspan>
<tspan x="195" y="68">axes exceeds the patch size of the full </tspan>
<tspan x="195" y="90">resolution model. If the low resolution </tspan>
<tspan x="195" y="112">configuration is triggered, the target spacing </tspan>
<tspan x="195" y="134">along each axes will be increased by two to </tspan>
<tspan x="195" y="156">incorporate more contextual information.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(605.000000, 263.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="275"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8C6EE" x="1.5" y="1.5" width="184" height="275"></rect>
<text id="Optimizer-&amp;-Learning" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="44.9916992" y="18">Optimizer &amp;</tspan>
<tspan x="35.4599609" y="40">Learning Rate</tspan>
</text>
<text id="All-configurations-a" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="24">All configurations are trained for 60 epochs with </tspan>
<tspan x="195" y="46">2500 mini batches per epoch and half of the </tspan>
<tspan x="195" y="68">batch is forced to contain at least one object. </tspan>
<tspan x="195" y="90">SGD with Nesterov momentum 0.9 is used.</tspan>
<tspan x="195" y="112">At the beginning of the training the learning rate </tspan>
<tspan x="195" y="134">is linearly ramped up from 1e-6 to 1e-2 over the </tspan>
<tspan x="195" y="156">first 4000 iterations. Poly learning rate schedule </tspan>
<tspan x="195" y="178">is used until epoch 50. The last 10 epochs are </tspan>
<tspan x="195" y="200">trained with a cyclic learning rate fluctuating </tspan>
<tspan x="195" y="222">between 1e-3 and 1e-6 during every epoch.</tspan>
<tspan x="195" y="244">We snapshot the model weights after each </tspan>
<tspan x="195" y="266">epoch for Stochastic Weight Averaging.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(0.000000, 671.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="121"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8C6EE" x="1.5" y="1.5" width="184" height="121"></rect>
<text id="Architecture-Templat" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="43.980957" y="18">Architecture</tspan>
<tspan x="55.4814453" y="40">Template</tspan>
</text>
<text id="Retina-U-Net-with-an" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="25">Retina U-Net with an encoder which consists of </tspan>
<tspan x="195" y="47">plain convolutions, ReLU and instance </tspan>
<tspan x="195" y="69">normalization blocks. The detection heads used </tspan>
<tspan x="195" y="91">for anchor classification and regression consist </tspan>
<tspan x="195" y="113">of three convolutions with group norm.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(605.000000, 0.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="263"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8C6EE" x="1.5" y="1.5" width="184" height="263"></rect>
<text id="Loss-Functions" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="31.4697266" y="18">Loss Functions</tspan>
</text>
<text id="Detection-Branch:-To" font-family="Helvetica-Bold, Helvetica" font-size="18" font-weight="bold" fill="#000000">
<tspan x="195" y="30">Detection Branch</tspan>
<tspan x="344.027344" y="30" font-family="Helvetica" font-weight="normal">: To balance positive and </tspan>
<tspan x="195" y="52" font-family="Helvetica" font-weight="normal">negative anchors, hard negative mining is used </tspan>
<tspan x="195" y="74" font-family="Helvetica" font-weight="normal">while selecting 1/3 positive and 2/3 negative </tspan>
<tspan x="195" y="96" font-family="Helvetica" font-weight="normal">anchors. The classification branch is trained with </tspan>
<tspan x="195" y="118" font-family="Helvetica" font-weight="normal">the Binary Cross-Entropy loss and the </tspan>
<tspan x="195" y="140" font-family="Helvetica" font-weight="normal">Generalized IoU Loss is used for anchor </tspan>
<tspan x="195" y="162" font-family="Helvetica" font-weight="normal">regression.</tspan>
<tspan x="195" y="184">Segmentation Branch</tspan>
<tspan x="381.029297" y="184" font-family="Helvetica" font-weight="normal">: The segmentation </tspan>
<tspan x="195" y="206" font-family="Helvetica" font-weight="normal">branch is trained with the Dice and Cross-</tspan>
<tspan x="195" y="228" font-family="Helvetica" font-weight="normal">Entropy loss to distinguish foreground and </tspan>
<tspan x="195" y="250" font-family="Helvetica" font-weight="normal">background pixels.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(605.000000, 538.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="72"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8C6EE" x="1.5" y="1.5" width="184" height="72"></rect>
<text id="Data-Augmentation" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="15.4428711" y="18">Data Augmentation</tspan>
</text>
<text id="We-use-the-same-augm" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="22">We use the same augmentation strategy as </tspan>
<tspan x="195" y="44">nnU-Net without simulating low resolution </tspan>
<tspan x="195" y="66">samples.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(0.000000, 792.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="99"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill="#B8C6EE" x="1.5" y="1.5" width="184" height="99"></rect>
<text id="Anchor-Matching" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="24.4648438" y="18">Anchor Matching</tspan>
</text>
<text id="Adaptive-Training-Sa" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="24">Adaptive Training Sample Selection (ATSS) is </tspan>
<tspan x="195" y="46">used to match anchors and ground truth boxes. </tspan>
<tspan x="195" y="68">The center of the anchor boxes do not need to </tspan>
<tspan x="195" y="90">lie within the ground truth box.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(605.000000, 610.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="187"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill-opacity="0.45" fill="#EE7400" x="1.5" y="1.5" width="184" height="187"></rect>
<text id="Empirical-Parameter" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="10.4770508" y="18">Empirical Parameter</tspan>
<tspan x="41.9814453" y="40">Optimization</tspan>
</text>
<text id="Parameters-which-are" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="23">Parameters which are only required during the </tspan>
<tspan x="195" y="45">inference procedure are empirically optimized </tspan>
<tspan x="195" y="67">by evaluating the performance on the validation </tspan>
<tspan x="195" y="89">set. This includes: the IoU threshold required for </tspan>
<tspan x="195" y="111">the NMS of the model, the IoU threshold </tspan>
<tspan x="195" y="133">required to perform WBC, a minimum probability </tspan>
<tspan x="195" y="155">for predictions of the model, and a minimum </tspan>
<tspan x="195" y="177">object size.</tspan>
</text>
</g>
<g id="Group-15" transform="translate(605.000000, 797.000000)">
<rect id="Rectangle" stroke="#979797" stroke-width="3" x="185.5" y="1.5" width="406" height="94"></rect>
<rect id="Rectangle" stroke="#979797" stroke-width="3" fill-opacity="0.45" fill="#EE7400" x="1.5" y="1.5" width="184" height="94"></rect>
<text id="Model-Selection" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="67.4873047" y="18">Model</tspan>
<tspan x="54.9760742" y="40">Selection</tspan>
</text>
<text id="If-the-low-resolutio" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="195" y="30">If the low resolution model was triggered, only </tspan>
<tspan x="195" y="52">the best model as determined by the five-fold </tspan>
<tspan x="195" y="74">cross-validation will be used for the test set.</tspan>
</text>
</g>
</g>
<g id="Group-25" transform="translate(0.000000, 901.000000)">
<use id="Rectangle-2" stroke="#585757" mask="url(#mask-2)" stroke-width="8" stroke-dasharray="3" xlink:href="#path-1"></use>
<g id="Group-23" transform="translate(110.000000, 13.000000)" fill="#000000">
<path id="Line-8" d="M57,4 L71,11 L57,18 L57,12 L-1,12 L-1,10 L57,10 L57,4 Z" fill-rule="nonzero"></path>
<text id="Symbolizes-a-depende" font-family="Helvetica" font-size="18" font-weight="normal">
<tspan x="82" y="18">Symbolizes a dependency</tspan>
</text>
</g>
<g id="Group-22" transform="translate(720.000000, 13.000000)" fill="#000000">
<path id="Line-10" d="M59.3686857,3.88816322 L60.2402612,4.37842446 L72.2402612,11.1284245 L73.7897289,12 L72.2402612,12.8715755 L60.2402612,19.6215755 L59.3686857,20.1118368 L58.3881632,18.3686857 L59.2597388,17.8784245 L69.71,12 L59.2597388,6.12157554 L58.3881632,5.6313143 L59.3686857,3.88816322 Z M8,11 L8,13 L-1,13 L-1,11 L8,11 Z M22,11 L22,13 L13,13 L13,11 L22,11 Z M36,11 L36,13 L27,13 L27,11 L36,11 Z M50,11 L50,13 L41,13 L41,11 L50,11 Z M64,11 L64,13 L55,13 L55,11 L64,11 Z" fill-rule="nonzero"></path>
<text id="Denotes-sequential-p" font-family="Helvetica" font-size="18" font-weight="normal">
<tspan x="85" y="18">Denotes sequential procedures</tspan>
</text>
</g>
</g>
</g>
</g>
</svg>
<?xml version="1.0" encoding="UTF-8"?>
<svg width="1186px" height="312px" viewBox="0 0 1186 312" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title>Group 31</title>
<defs>
<polygon id="path-1" points="0 0 526 0 526 312 259.271786 312 0 312"></polygon>
<mask id="mask-2" maskContentUnits="userSpaceOnUse" maskUnits="objectBoundingBox" x="0" y="0" width="526" height="312" fill="white">
<use xlink:href="#path-1"></use>
</mask>
<rect id="path-3" x="25" y="13" width="238" height="284"></rect>
<mask id="mask-4" maskContentUnits="userSpaceOnUse" maskUnits="objectBoundingBox" x="0" y="0" width="238" height="284" fill="white">
<use xlink:href="#path-3"></use>
</mask>
<polygon id="path-5" points="0 1.08246745e-15 442 1.08246745e-15 648 0 648 147 442 147 442 312 0 312"></polygon>
<mask id="mask-6" maskContentUnits="userSpaceOnUse" maskUnits="objectBoundingBox" x="0" y="0" width="648" height="312" fill="white">
<use xlink:href="#path-5"></use>
</mask>
<rect id="path-7" x="455" y="156" width="193" height="156"></rect>
<mask id="mask-8" maskContentUnits="userSpaceOnUse" maskUnits="objectBoundingBox" x="0" y="0" width="193" height="156" fill="white">
<use xlink:href="#path-7"></use>
</mask>
</defs>
<g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="Group-31">
<g id="Group-29">
<g id="Group-9" transform="translate(38.000000, 25.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#D0B8EE" x="1" y="1" width="209" height="60" rx="7"></rect>
<text id="planning.properties." font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="25.9467773" y="27">planning.properties.</tspan>
<tspan x="71.4785156" y="49">instance</tspan>
</text>
</g>
<g id="Group-9" transform="translate(38.000000, 125.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#D0B8EE" x="1" y="1" width="209" height="60" rx="7"></rect>
<text id="planning.properties." font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="25.9467773" y="27">planning.properties.</tspan>
<tspan x="71.9838867" y="49">intensity</tspan>
</text>
</g>
<g id="Group-9" transform="translate(36.000000, 225.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#D0B8EE" x="1" y="1" width="209" height="60" rx="7"></rect>
<text id="planning.properties." font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="25.9467773" y="27">planning.properties.</tspan>
<tspan x="73.987793" y="49">medical</tspan>
</text>
</g>
<g id="Group-9" transform="translate(282.000000, 13.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#BEEEB8" x="1" y="1" width="221" height="60" rx="7"></rect>
<text id="planning.-plan_exper" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="74.9682617" y="27">planning.</tspan>
<tspan x="44.9580078" y="49">plan_experiment</tspan>
</text>
</g>
<g id="Group-9" transform="translate(282.000000, 87.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#BEEEB8" x="1" y="1" width="221" height="60" rx="7"></rect>
<text id="planning.-plan_archi" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="74.9682617" y="27">planning.</tspan>
<tspan x="42.4575195" y="49">plan_architecture</tspan>
</text>
</g>
<g id="Group-9" transform="translate(282.000000, 161.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#BEEEB8" x="1" y="1" width="221" height="60" rx="7"></rect>
<text id="planning.-estimator" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="74.9682617" y="27">planning.</tspan>
<tspan x="74.4892578" y="49">estimator</tspan>
</text>
</g>
<g id="Group-9" transform="translate(282.000000, 235.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#BEEEB8" x="1" y="1" width="221" height="60" rx="7"></rect>
<text id="preprocessing" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="54.96875" y="38">preprocessing</tspan>
</text>
</g>
<use id="Rectangle" stroke="#64BF58" mask="url(#mask-2)" stroke-width="8" stroke-dasharray="2" xlink:href="#path-1"></use>
<use id="Rectangle" stroke="#A261F4" mask="url(#mask-4)" stroke-width="6" stroke-dasharray="3" xlink:href="#path-3"></use>
</g>
<g id="Group-30" transform="translate(538.000000, 0.000000)">
<g id="Group-9" transform="translate(226.000000, 161.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="io.datamodule.-bg_lo" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="30.4594727" y="27">io.datamodule.</tspan>
<tspan x="49.4658203" y="49">bg_loader</tspan>
</text>
</g>
<g id="Group-9" transform="translate(226.000000, 235.000000)" fill="#E0E0E0" stroke="#000000" stroke-width="2">
<rect id="Rectangle" x="1" y="1" width="177" height="60" rx="7"></rect>
</g>
<g id="Group-9" transform="translate(454.000000, 8.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#E0E0E0" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="evaluator" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="52.4760742" y="38">evaluator</tspan>
</text>
</g>
<g id="Group-9" transform="translate(454.000000, 75.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#E0E0E0" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="utils" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="73.4951172" y="38">utils</tspan>
</text>
</g>
<g id="Group-9" transform="translate(12.000000, 13.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#E0E0E0" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="ptmodule" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="52.4760742" y="38">ptmodule</tspan>
</text>
</g>
<g id="Group-9" transform="translate(226.000000, 87.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="io.-augmentation" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="79.9946289" y="27">io.</tspan>
<tspan x="34.9594727" y="49">augmentation</tspan>
</text>
</g>
<g id="Group-9" transform="translate(226.000000, 13.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="training" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="59.9819336" y="38">training</tspan>
</text>
</g>
<g id="Group-9" transform="translate(12.000000, 161.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="models" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="60.4873047" y="38">models</tspan>
</text>
</g>
<g id="Group-9" transform="translate(462.000000, 169.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#F7C08C" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="inferene" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="56.9760742" y="38">inference</tspan>
</text>
</g>
<g id="Group-9" transform="translate(462.000000, 241.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#F7C08C" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="inferene.-sweeper" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="54.4755859" y="27">inference.</tspan>
<tspan x="55.4819336" y="49">sweeper</tspan>
</text>
</g>
<g id="Group-9" transform="translate(12.000000, 87.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="detection" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="52.9726562" y="38">detection</tspan>
</text>
</g>
<g id="Group-9" transform="translate(12.000000, 235.000000)">
<rect id="Rectangle" stroke="#000000" stroke-width="2" fill="#B8C6EE" x="1" y="1" width="177" height="60" rx="7"></rect>
<text id="losses" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="63.9897461" y="38">losses</tspan>
</text>
</g>
<use id="Rectangle" stroke="#537EF9" mask="url(#mask-6)" stroke-width="8" stroke-dasharray="2" xlink:href="#path-5"></use>
<use id="Rectangle" stroke="#EE7400" mask="url(#mask-8)" stroke-width="6" stroke-dasharray="3" xlink:href="#path-7"></use>
<text id="io.datamodule.-bg_mo" font-family="Helvetica" font-size="18" font-weight="normal" fill="#000000">
<tspan x="256.459473" y="262">io.datamodule.</tspan>
<tspan x="270.96582" y="284">bg_module</tspan>
</text>
</g>
</g>
</g>
</svg>