Where can you set the Pytorch model function called by Triton for a Deepstream app? – DeepStream SDK

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Where can I set or find the function that DeepStream calls in a PyTorch model converted to a TensorRT model?


The DeepStream nvinfer plugin will convert the model to a TensorRT engine before inference, but nvinfer does not support PyTorch models directly. Please refer to inputs-and-outputs; you need to convert the PyTorch model to an ONNX model first.
