Super Resolution with ONNX Runtime
==================================
Author:

Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime
------------------------------------------------------------------------

.. note::
   As of PyTorch 2.1, there are two versions of the ONNX exporter.

   * ``torch.onnx.dynamo_export`` is the newest (still in beta) exporter, based on the TorchDynamo technology released with PyTorch 2.0.
   * ``torch.onnx.export`` is based on the TorchScript backend and has been available since PyTorch 1.2.0.

In this tutorial, we describe how to convert a model defined in PyTorch into the ONNX format using the TorchScript-based ``torch.onnx.export`` ONNX exporter. The exported model will be executed with ONNX Runtime. ONNX Runtime is a performance-focused engine for ONNX models, which runs inference efficiently across multiple platforms and hardware (Windows, Linux, and Mac, on both CPUs and GPUs). ONNX Runtime has been shown to considerably increase performance across multiple models, as explained `here <https://cloudblogs.microsoft.com/opensource/2019/05/22/onnx-runtime-machine-learning-inferencing-0-4-release>`__.

For this tutorial, you will need to install `ONNX <https://github.com/onnx/onnx>`__ and `ONNX Runtime <https://github.com/microsoft/onnxruntime>`__. You can get binary builds of ONNX and ONNX Runtime with
Tasks: Deep Learning Fundamentals, Image-to-Image
Task Categories: Deep Learning Fundamentals, Computer Vision
Published: 10/08/23
Tags: onnx, onnxruntime, pytorch, inference