Unverified commit ace61a41, authored by msbaines, committed by GitHub

[chore] update to torch v1.7.0 (#171)

parent ea9876e3
@@ -38,12 +38,12 @@ setup_venv: &setup_venv
 which pip
 pip install --upgrade pip
-install_dep_15: &install_dep_15
+install_dep_17: &install_dep_17
 - run:
 name: Install Dependencies
 command: |
 sudo apt-get install -y libopenmpi-dev
-pip install --progress-bar off torch==1.5.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
+pip install --progress-bar off torch==1.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
 pip install --progress-bar off -r requirements-test.txt
 python -c 'import torch; print("Torch version:", torch.__version__)'
 python -m torch.utils.collect_env
@@ -108,7 +108,7 @@ run_oss_benchmark: &run_oss_benchmark
 - run:
 name: Run OSS Benchmark
 command: |
-python benchmarks/oss.py --check_regression --world_size 4 --reference_speed 780 --reference_memory 1120 --reference_loss 0.049
+python benchmarks/oss.py --check_regression --world_size 4 --reference_speed 760 --reference_memory 1120 --reference_loss 0.023
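The benchmark job gates the build on reference numbers passed via `--reference_speed`, `--reference_memory`, and `--reference_loss`. A minimal sketch of what a regression gate of this shape could look like — the function name, metric dictionary, and tolerance here are illustrative assumptions, not fairscale's actual implementation:

```python
# Hypothetical sketch of a --check_regression style gate: measured metrics
# must stay within a tolerance of the stored reference values.
def check_regression(measured, reference, tolerance=0.05):
    """Raise AssertionError if speed drops, or memory/loss rise, beyond the margin."""
    assert measured["speed"] >= reference["speed"] * (1 - tolerance), "speed regressed"
    assert measured["memory"] <= reference["memory"] * (1 + tolerance), "memory regressed"
    assert measured["loss"] <= reference["loss"] * (1 + tolerance), "loss regressed"
    return True

# Reference values taken from the updated CI command above.
reference = {"speed": 760, "memory": 1120, "loss": 0.023}
measured = {"speed": 765, "memory": 1100, "loss": 0.023}
check_regression(measured, reference)  # passes: all metrics within tolerance
```

Note that this commit both lowers the reference speed (780 to 760) and tightens the reference loss (0.049 to 0.023), so the stored baselines track the behavior of the new torch version.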
run_oss_gloo: &run_oss_gloo
- run:
@@ -167,7 +167,7 @@ jobs:
 - store_test_results:
 path: test-results
-gpu_tests_15:
+gpu_tests_17:
 <<: *gpu
 working_directory: ~/fairscale
@@ -184,14 +184,14 @@ jobs:
 # Cache the venv directory that contains dependencies
 - restore_cache:
 keys:
-- cache-key-gpu15-{{ checksum "setup.py"}}-{{ checksum "requirements-test.txt"}}
+- cache-key-gpu17-{{ checksum "setup.py"}}-{{ checksum "requirements-test.txt"}}
-- <<: *install_dep_15
+- <<: *install_dep_17
 - save_cache:
 paths:
 - ~/venv
-key: cache-key-gpu15-{{ checksum "setup.py"}}-{{ checksum "requirements-test.txt"}}
+key: cache-key-gpu17-{{ checksum "setup.py"}}-{{ checksum "requirements-test.txt"}}
 - <<: *install_repo_gpu
@@ -258,7 +258,7 @@ jobs:
 keys:
 - cache-key-benchmarks-{{ checksum "setup.py"}}-{{ checksum "requirements-test.txt"}}
-- <<: *install_dep_16
+- <<: *install_dep_17
 - save_cache:
 paths:
@@ -281,6 +281,6 @@ workflows:
 build:
 jobs:
 - cpu_tests
-- gpu_tests_15
+- gpu_tests_17
 - gpu_tests_16
 - benchmarks
@@ -16,7 +16,7 @@ fairscale supports:
 ## Requirements
-* PyTorch >= 1.4
+* PyTorch >= 1.5.1
 ## Installation
@@ -106,7 +106,7 @@ if __name__ == "__main__":
 # Testing
-We use circleci to test on PyTorch versions 1.5.1 and 1.6.0 and CUDA version 10.1. Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.
+We use circleci to test on PyTorch versions 1.6.0 and 1.7.0 and CUDA version 10.1. Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.
 ## Contributors
@@ -7,7 +7,7 @@ pytest == 5.4.1
 pytest-cov == 2.10.0
 pytest-mpi == 0.4
 torchtext == 0.6.0
-torch >= 1.5.1
+torch >= 1.6.0
 torchvision >= 0.6.0
 # NOTE(msb) not a dependency but needed by torch
 numpy == 1.17.4
@@ -195,8 +195,9 @@ def test_conv_bn():
 def test_input_requiring_grad():
     dbn = DeferredBatchNorm(3, chunks=CHUNKS)
-    input = torch.rand(16, 3, 224, 224, requires_grad=True)
+    input = torch.rand(16, 3, 224, 224)
     input = tilt_dist(input)
+    input.requires_grad = True
     chunked_forward(dbn, input)
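The reordering above likely works around a PyTorch autograd constraint: a leaf tensor that already requires grad cannot be modified in place, so if `tilt_dist` mutates its input, grad must be enabled only after preprocessing. A minimal sketch of that constraint (an illustrative assumption about the motivation, not the actual fairscale test):

```python
import torch

# Preprocessing a tensor in place works only while grad is off;
# flipping requires_grad on afterwards mirrors the test change above.
x = torch.rand(16, 3, 8, 8)
x.mul_(2.0)              # fine: no autograd tracking yet
x.requires_grad = True   # enable grad once preprocessing is done

try:
    x.mul_(2.0)          # in-place op on a leaf tensor that requires grad
    raised = False
except RuntimeError:
    raised = True        # autograd rejects the in-place mutation
```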
@@ -635,7 +635,7 @@ def deferred_batch_norm_params(checkpoint, lazy):
     assert torch.allclose(pipe[0].bias.grad, bn.bias.grad, atol=1e-4)
-@torch_spawn([5])
+@torch_spawn([4])
 def devices():
     a = nn.Linear(1, 1)
     b = nn.Linear(1, 1)
@@ -646,7 +646,7 @@ def devices():
     model = Pipe(model, [1, 1, 1], style=Pipe.MultiProcess, worker_map=get_worker_map())
     # Extra devices must be discarded.
-    if model.group.rank() in [3, 4]:
+    if model.group.rank() == 3:
         assert model.pipeline is None