How to run PyTorch against NVIDIA's CUDA Toolkit instead of the official conda "cudatoolkit" package?

You can install PyTorch via pip:

pip install torch torchvision

This is also an official installation method; it is offered in the "command helper" at pytorch.org/get-started/locally/.

It uses the preinstalled CUDA and doesn't download its own CUDA Toolkit. You can also choose which CUDA version to install PyTorch for:

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
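Either way, you can check afterwards which CUDA version the installed wheel was compiled against. A minimal sketch (the helper name `report_torch_cuda` is mine; `torch.version.cuda` and `torch.cuda.is_available()` are standard PyTorch attributes):

```python
import importlib.util

def report_torch_cuda():
    """Return a short status string describing the installed torch build."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    # torch.version.cuda is None for CPU-only builds
    cuda = torch.version.cuda or "cpu-only build"
    # is_available() also requires a working driver, not just the toolkit
    gpu = torch.cuda.is_available()
    return f"torch {torch.__version__}, compiled for CUDA {cuda}, GPU usable: {gpu}"

print(report_torch_cuda())
```

For a `+cu110` wheel, `torch.version.cuda` should report `11.0` regardless of which CUDA Toolkit versions are installed elsewhere on the system.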

I imagine it is probably possible to get a conda-installed PyTorch to use a non-conda-installed CUDA toolkit, but I don't know how to do it. In my experience, when using conda packages that depend on CUDA, it's much easier to provide a conda-installed CUDA toolkit and let them use that, rather than anything else. This often means I have one CUDA toolkit installed inside conda and another in the usual system location.

However, regardless of how you install PyTorch: if you install a binary package (e.g. via conda), that build of PyTorch depends on the specific CUDA version it was compiled against (e.g. 10.2), and no other CUDA version, however or wherever it is installed, can satisfy that dependency.
