Low performance and thread-MPI error with multi-GPU usage

GROMACS version: 2023.01
GROMACS modification: No

I installed GROMACS 2023.01 using the cmake command below.

cmake … -DGMX_USE_RDTSCP=ON -DGMX_SIMD=AVX2_256 -DGMX_BUILD_MDRUN_ONLY=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_THREAD_MPI=ON -DGMX_GPU=CUDA -DCMAKE_C_COMPILER=gcc-9 -DCMAKE_CXX_COMPILER=g++-9
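As far as I understand from the GROMACS install guide, GMX_MPI and GMX_THREAD_MPI are mutually exclusive: enabling GMX_MPI builds against a real MPI library and disables the built-in thread-MPI, so thread-MPI options such as -ntmpi are unavailable in that build. A sketch of the two alternative configurations (source path and other arguments elided as above):

# Thread-MPI build (the default; supports gmx mdrun -ntmpi)
cmake … -DGMX_SIMD=AVX2_256 -DGMX_GPU=CUDA -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-9 -DCMAKE_CXX_COMPILER=g++-9

# Real-MPI build (installs gmx_mpi; rank count comes from mpirun -np, not -ntmpi)
cmake … -DGMX_MPI=ON -DGMX_SIMD=AVX2_256 -DGMX_GPU=CUDA -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-9 -DCMAKE_CXX_COMPILER=g++-9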

But when I execute mdrun as below, I get an error saying that thread-MPI was not compiled in during installation:

mpirun -np 2 gmx mdrun -v -deffnm ${mini_prefix} -ntmpi 2 -npme 1 -gputasks 01
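If the build actually had thread-MPI (GMX_MPI left OFF), my understanding is that mdrun would be launched directly, without mpirun — a sketch of the two-rank, two-GPU run I am aiming for:

# Thread-MPI launch: 2 thread-MPI ranks, 1 of them PME, mapped to GPUs 0 and 1
gmx mdrun -v -deffnm ${mini_prefix} -ntmpi 2 -npme 1 -gputasks 01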

When I use another command to execute mdrun, the process runs normally, but the performance is much slower than with a single GPU (77 ns/day → 32 ns/day):

mpirun -np 2 gmx mdrun -v -deffnm ${mini_prefix} -gpu_id 01
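For comparison, a real-MPI build normally installs the binary as gmx_mpi, takes its rank count from mpirun -np, and still accepts -npme and -gputasks at run time — a sketch, assuming one PP and one PME rank across two GPUs:

# Real-MPI launch: 2 MPI ranks, 1 dedicated PME rank, GPUs 0 and 1
mpirun -np 2 gmx_mpi mdrun -v -deffnm ${mini_prefix} -npme 1 -gputasks 01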

I want to run mdrun with the -ntmpi and -npme options to improve multi-GPU performance, but I can't because of the thread-MPI error.
Is there anything wrong with my cmake build command?
