@@ -293,6 +293,7 @@ Below you can find inference durations in milliseconds for each model with and w
...
We also benchmarked on PyTorch nightly (2.1.0dev; find the wheel [here](https://download.pytorch.org/whl/nightly/cu118)) and observed latency improvements for both uncompiled and compiled models.