Benjamin Lefaudeux authored
* Align the optimizer state dict with what PyTorch expects
* Add a check on the dict keys, ensuring that `state` and `param_groups` are present
* After installing the pinned isort, black, and friends, a one-liner to please the linter
* Add some measurement of memory consumption while training and checkpointing
* Obligatory lint-fix commit
* Fix: reset the memory-use counter at the beginning of training, in case two runs happen in a row
* Move the reset-stats call, hotfix
* Move the optimizer to RMSprop, which is more stateful and still used in CV
* Try to figure out a SIGSEGV in CircleCI
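The key check amounts to validating the two top-level keys that `torch.optim.Optimizer.state_dict()` is documented to return. A minimal sketch of such a check (the `validate_optim_state_dict` helper is hypothetical, not the code from this commit); RMSprop is used in the usage example because, as the commit notes, it keeps per-parameter running averages and therefore actually exercises the `state` entry:

```python
import torch

def validate_optim_state_dict(state_dict: dict) -> None:
    """Hypothetical helper: reject state dicts that PyTorch cannot load."""
    # torch.optim.Optimizer.state_dict() returns exactly these two keys.
    required = {"state", "param_groups"}
    missing = required - state_dict.keys()
    if missing:
        raise KeyError(f"optimizer state dict is missing keys: {sorted(missing)}")

# Usage: RMSprop keeps running averages per parameter, so its `state`
# entry is non-trivial once a step has been taken.
model = torch.nn.Linear(4, 2)
opt = torch.optim.RMSprop(model.parameters(), lr=1e-2)
validate_optim_state_dict(opt.state_dict())
```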
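For the memory measurement and the counter reset, the likely shape is PyTorch's CUDA peak-memory API: reset the high-water mark before the run so back-to-back runs in one process don't inherit each other's peak, then read it after training. A minimal sketch under that assumption (`train_with_mem_stats` is a hypothetical name, not the benchmark code from this commit):

```python
import torch

def train_with_mem_stats(model, optimizer, data_loader, device="cuda"):
    """Hypothetical sketch: train and report peak CUDA memory."""
    # Reset the peak counter up front so a second run in the same
    # process does not report the previous run's high-water mark.
    torch.cuda.reset_peak_memory_stats(device)
    loss_fn = torch.nn.MSELoss()
    for inputs, targets in data_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs.to(device)), targets.to(device))
        loss.backward()
        optimizer.step()
    peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
    print(f"peak CUDA memory during training: {peak_mb:.1f} MB")
```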
ee38e1e0