"...git@developer.sourcefind.cn:modelzoo/solov2-pytorch.git" did not exist on "527629fe1d03279af0d0afbd94b38a5fca8812d0"
Enable ONNX/ONNXRuntime optimizations through converter script (#6131)
* Add onnxruntime transformers optimization support
* Add Optimization section to the ONNX/ONNXRuntime documentation
* Improve note reference
* Fix import order
* Add warning about the different levels of optimization between the torch and tf exports
* Address @LysandreJik wording suggestions
* Always optimize the model before quantization for maximum performance
* Address comments on the documentation
* Improve the TensorFlow optimization message as suggested by @yufenglee
* Remove the --optimize parameter
* Warn the user about the current quantization limitation when the model is larger than 2GB
* Trigger CI for a last check
* Small change to the print output in the optimization section

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
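For context, a minimal sketch of the export/optimize/quantize flow this PR wires into the converter. It assumes the `transformers.convert_graph_to_onnx` module exposes `convert`, `optimize`, and `quantize` helpers with the signatures shown; exact names, arguments, and return values may differ between versions, so treat this as illustrative rather than the definitive API.

```python
# Illustrative sketch only; function names/signatures below are assumptions
# based on the transformers.convert_graph_to_onnx module around this PR.
from pathlib import Path

from transformers.convert_graph_to_onnx import convert, optimize, quantize

output = Path("onnx/bert-base-cased.onnx")

# Export the PyTorch model to an ONNX graph.
convert(framework="pt", model="bert-base-cased", output=output, opset=11)

# Per this PR, the graph is always optimized before quantization,
# so there is no separate --optimize parameter on the CLI anymore.
optimized_path = optimize(output)

# Quantize the optimized graph for smaller, faster CPU inference.
# Note: models larger than 2GB currently hit a quantization limitation.
quantized_path = quantize(optimized_path)
print(f"Quantized model written to {quantized_path}")
```

The same flow is available from the command line by running the converter script with a quantization flag; as noted above, the TensorFlow export currently receives a lower level of graph optimization than the PyTorch one.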