python – PyTorch working in Miniconda but “not compiled with CUDA enabled” in PyCharm

I have had quite the journey trying to get PyCharm to use my GPU (NVIDIA GeForce GTX 1080 Ti) when running code from this GitHub repo: github.com/gordicaleksa/pytorch-neural-style-transfer

After a whole lot of back and forth setting up CUDA, cuDNN, etc., I have finally got PyTorch working (pretty sure) in my Miniconda Prompt, but PyTorch does not recognize or use my GPU when I run the code in PyCharm.

For reference, here’s what I have downloaded:
Miniconda3
CUDA Toolkit 10.1
cuDNN v7.6.5 (November 5th, 2019) for CUDA 10.1
PyTorch v1.4.0 (installed inside of Miniconda)
as well as the C++ build tools from Visual Studio and the latest NVIDIA driver and GeForce Experience; I also added the CUDA bin and lib folders to my Path environment variable.

As I said, PyTorch seems to be working fine in my Miniconda prompt:

(base) C:\Users\Riley>python
Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.get_device_name()
'NVIDIA GeForce GTX 1080 Ti'
>>> torch.cuda.is_available()
True
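For what it's worth, my understanding is that a CPU-only PyTorch build reports `None` for `torch.version.cuda`, while a CUDA build reports a version string like `"10.1"`, so the two builds can be told apart without calling any `torch.cuda` functions. A small sketch of that check (the `describe_build` helper is my own name, not part of torch):

```python
# Sketch of a build check: torch.version.cuda is None on a CPU-only
# build and a version string like "10.1" on a CUDA-enabled build.
# describe_build is a hypothetical helper, not part of the torch API.

def describe_build(cuda_version):
    """Return a human-readable description of a PyTorch build."""
    if cuda_version is None:
        return "CPU-only build (not compiled with CUDA)"
    return "CUDA build (CUDA {})".format(cuda_version)

# In the working Miniconda prompt I would expect something like:
#   import torch
#   print(describe_build(torch.version.cuda))
```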

Then, following this video, I activate the pytorch-nst environment inside of conda:

(base) C:UsersRiley>cd C:Git_tempNST_1_Optimizationpytorch-neural-style-transfer

(base) C:Git_tempNST_1_Optimizationpytorch-neural-style-transfer>conda env list
# conda environments:
#
base                  *  C:UsersRileyminiconda3
pytorch-nst              C:UsersRileyminiconda3envspytorch-nst


(base) C:Git_tempNST_1_Optimizationpytorch-neural-style-transfer>conda activate pytorch-nst

(pytorch-nst) C:Git_tempNST_1_Optimizationpytorch-neural-style-transfer>

Next, I open the pytorch-neural-style-transfer project in PyCharm and configure (I think?) my project interpreter to use the existing conda environment, with the following interpreter and conda executable paths:
(screenshot of the PyCharm interpreter settings)
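One sanity check I can run in both consoles is printing `sys.executable` to see which interpreter is actually being used, since the error would make sense if PyCharm were quietly running a different environment. A sketch of that comparison (the `same_interpreter` helper and the example paths are mine; it normalizes case and slash direction by hand so the check behaves the same on any OS):

```python
import sys

# Print which Python interpreter this console is actually running.
# Run this in both the Miniconda prompt and the PyCharm console
# and compare the two outputs.
print(sys.executable)

def same_interpreter(path_a, path_b):
    """Hypothetical helper: compare two interpreter paths, ignoring
    case and slash direction (both vary on Windows)."""
    norm = lambda p: p.replace("\\", "/").lower()
    return norm(path_a) == norm(path_b)

# Example: the env I activated in conda vs. a PyCharm-reported path.
print(same_interpreter(
    r"C:\Users\Riley\miniconda3\envs\pytorch-nst\python.exe",
    "C:/Users/Riley/Miniconda3/envs/pytorch-nst/python.exe",
))
```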

PyCharm will run reconstruct_image_from_representation.py, but it is not using my GPU. Here is what I get in the Python console when I try to call some torch functions:

torch.cuda.get_device_name()
Traceback (most recent call last):
  File "C:\Users\Riley\miniconda3\envs\pytorch-nst\lib\code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "C:\Users\Riley\miniconda3\envs\pytorch-nst\lib\site-packages\torch\cuda\__init__.py", line 304, in get_device_name
    return get_device_properties(device).name
  File "C:\Users\Riley\miniconda3\envs\pytorch-nst\lib\site-packages\torch\cuda\__init__.py", line 325, in get_device_properties
    _lazy_init()  # will define _get_device_properties and _CudaDeviceProperties
  File "C:\Users\Riley\miniconda3\envs\pytorch-nst\lib\site-packages\torch\cuda\__init__.py", line 196, in _lazy_init
    _check_driver()
  File "C:\Users\Riley\miniconda3\envs\pytorch-nst\lib\site-packages\torch\cuda\__init__.py", line 94, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Thank you so much for helping me out!

Read more here: Source link