Tag: OpenMP

Scaling of MD with domain decomposition on JUWELS Cluster – Developers discussions

GROMACS version: 2024-dev-20240201-787d96c7a9-unknown. GROMACS modification: Yes. I'm conducting some performance tests on the JUWELS Cluster, trying to see the improvement in performance that DD can bring within a single node. GROMACS build info: GROMACS was built using GCC v11.4.0, uses OpenMP and OpenMPI v4.1.5, and builds its own FFTW v3.3.8. JUWELS Cluster…
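
For context, a minimal sketch of the kind of intra-node rank/thread scan being described, assuming an MPI-enabled gmx_mpi binary, a 48-core node, and a hypothetical topol.tpr (none of these details are taken from the post):

    # Scan DD within one node: vary the number of MPI ranks (DD domains)
    # while keeping ranks x OpenMP threads equal to the 48 cores assumed here.
    for ranks in 1 2 4 8 12 24 48; do
        threads=$((48 / ranks))
        srun -N 1 -n ${ranks} --cpus-per-task=${threads} \
            gmx_mpi mdrun -s topol.tpr -deffnm dd_${ranks} -ntomp ${threads} -resethway -nsteps 20000
    done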

Continue Reading Scaling of MD with domain decomposition on JUWELS Cluster – Developers discussions

[llvm-bugs] [Bug 75428] clang crash when building IPEX code.

Issue 75428 Summary clang crash when build IPEX code. Labels clang Assignees Reporter xuhancn The error msg: “`cmd Stack dump: 0. Program arguments: /usr/bin/clang++ -DAT_PARALLEL_OPENMP=1 -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -Dintel_ext_pt_cpu_EXPORTS -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/include -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/cpu -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/cpu/aten -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/cpu/utils -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/cpu/jit -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/jit -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/utils -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/third_party/ideep/mkl-dnn/include -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/csrc/cpu/tpp -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/third_party/libxsmm/include -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/build/Release/csrc/cpu/csrc/cpu/cpu_third_party/ideep/mkl-dnn/include -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/third_party/ideep/include -I/home/xu/anaconda3/envs/ipex_cpu/include/python3.12 -I/home/xu/anaconda3/envs/ipex_cpu/include -I/home/xu/anaconda3/envs/ipex_cpu/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -I/home/xu/conda_spaces/ipex_cpu/frameworks.ai.pytorch.ipex-cpu/third_party/ideep/mkl-dnn/src/../include -isystem /home/xu/anaconda3/envs/ipex_cpu/lib/python3.12/site-packages/torch/include -fPIC…

Continue Reading [llvm-bugs] [Bug 75428] clang crash when building IPEX code.

Specifying time length of protein simulation – User discussions

cjbraz December 13, 2023, 7:56pm 1 GROMACS version: 2023.2. GROMACS modification: Yes (GMX_OPENMP_MAX_THREADS = 128). Hello, the linked post below concerns a slightly older version of GROMACS than mine, but I would like to get an answer to that post's question. Running the linked production script only…

Continue Reading Specifying time length of protein simulation – User discussions

Mpi_run aborts with abort code 1 – LAMMPS General Discussion

Hello lammps users, I am new to lammps. I have two questions. I have a system with around 40000 atoms. Will it take significantly less time if I run my simulation using mpi_run instead of serial_run? As I am running using the command "mpiexec -np 18 lmp -in NaCl.in", I…
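
For reference, a minimal sketch of the two launch modes being compared, assuming the input file NaCl.in from the post and a LAMMPS binary named lmp:

    # Serial run: one process
    lmp -in NaCl.in -log log.serial
    # MPI-parallel run across 18 processes; the speedup for ~40000 atoms depends on the pair style and load balance
    mpiexec -np 18 lmp -in NaCl.in -log log.mpi18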

Continue Reading Mpi_run aborts with abort code 1 – LAMMPS General Discussion

Gromacs 2023.3 on Apple M3 chip – User discussions

GROMACS version: 2023.3. GROMACS modification: No. Hi folks, I know that there have been several threads on running Gromacs with Apple M1 or M2 chips (e.g. Error compiling Gromacs 2023's checks on Mac M2), but I recently got a MacBook Pro with the M3 chip, so I was interested…

Continue Reading Gromacs 2023.3 on Apple M3 chip – User discussions

Parallel run – User discussions

SP06 December 7, 2023, 4:28am 1 GROMACS version: 2021.4. GROMACS modification: (not stated). Can anyone please help me with how to do a parallel run on my desktop? I went through the documentation for this but I could not understand it. Please explain how to do parallelization of the…
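
As a starting point, a minimal sketch of running a thread-MPI GROMACS build in parallel on a desktop (the md file prefix and the 4x2 core split are assumptions, not from the post):

    # 4 thread-MPI ranks x 2 OpenMP threads each (adjust to your core count)
    gmx mdrun -ntmpi 4 -ntomp 2 -deffnm md
    # Or simply let mdrun choose the rank/thread layout automatically
    gmx mdrun -deffnm md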

Continue Reading Parallel run – User discussions

Issue with Accessing Thermo Data in Pylammps – LAMMPS Installation

Dear members of the LAMMPS mailing list, I have been attempting to build and install the LAMMPS Python interface (Pylammps) to leverage the advantages of accessing thermo data. However, I am encountering the following error: Based on my understanding from the documentation, if I set up a simulation with a…

Continue Reading Issue with Accessing Thermo Data in Pylammps – LAMMPS Installation

python – Different behavior in the same conda-pytorch env on different GPUs

I have a project that uses a conda env with an old pytorch version. It works smoothly if I use an Nvidia V100, but it won't run on other GPUs (I've tried RTX3080, TeslaA10, RTX2080TI, TeslaA2, TeslaT4) using…

Continue Reading python – Different behavior in the same conda-pytorch env on different GPUs

AMD extends ROCm 5.7 & PyTorch support to Radeon RX 7900 XT

AMD RX 7900 XT now supported by ROCm 5.7. Three weeks ago, AMD announced that it would support the first RDNA3 GPUs through its ROCm platform for PyTorch. Today, a new gaming GPU is being added to this list. AMD is willing to put some money and effort into supporting machine…

Continue Reading AMD extends ROCm 5.7 & PyTorch support to Radeon RX 7900 XT

Nccl_external fails while trying to compile pytorch from source – torch.compile

Hello, I’m trying to compile pytorch from source and encountering the following build error. $ CC=gcc-10 CXX=g++-10 python setup.py develop … [5995/6841] Linking CXX executable bin/HashStoreTest Warning: Unused direct dependencies: /home/netfpga/research/collective/pytorch/build/lib/libc10.so /home/netfpga/anaconda3/envs/pytorch_base/lib/libmkl_intel_lp64.so.1 /home/netfpga/anaconda3/envs/pytorch_base/lib/libmkl_gnu_thread.so.1 /home/netfpga/anaconda3/envs/pytorch_base/lib/libmkl_core.so.1 /lib/x86_64-linux-gnu/libdl.so.2 /home/netfpga/anaconda3/envs/pytorch_base/lib/libgomp.so.1 [5996/6841] Performing build step for ‘nccl_external’ FAILED: nccl_external-prefix/src/nccl_external-stamp/nccl_external-build nccl/lib/libnccl_static.a /home/netfpga/research/collective/pytorch/build/nccl_external-prefix/src/nccl_external-stamp/nccl_external-build /home/netfpga/research/collective/pytorch/build/nccl/lib/libnccl_static.a cd /home/netfpga/research/collective/pytorch/third_party/nccl/nccl &&…

Continue Reading Nccl_external fails while trying to compile pytorch from source – torch.compile

University Positions – Postdoc in GPU algorithms for molecular dynamics with GROMACS

Job description We are looking for an HPC expert to contribute to one or multiple projects related to the GROMACS molecular simulation software. GROMACS is one of the most important HPC applications in the world and runs on all modern CPU and GPU architectures. GROMACS uses a hierarchical parallelization with…

Continue Reading University Positions – Postdoc in GPU algorithms for molecular dynamics with GROMACS

python – Docker with Rstudio & conda virtual env – unable to load R packages

I have this dockerfile: # Use the rocker/rstudio image with R version 4.1.2 FROM rocker/rstudio:4.1.2 # Install deps RUN apt-get update && apt-get install -y \ wget \ bzip2 \ bash-completion \ libxml2-dev \ zlib1g-dev \ libxtst6 \ libxt6 \ libhdf5-dev \ libcurl4-openssl-dev \ libssl-dev \ libfontconfig1-dev \ libcairo2-dev \…

Continue Reading python – Docker with Rstudio & conda virtual env – unable to load R packages

Cuda error with gromacs 2023.3 (CUDA error #700 an illegal memory access) – User discussions

roozi November 7, 2023, 10:16pm 1 GROMACS version: 2023.3. GROMACS modification: No. I'm getting a CUDA error using the standard Gromacs 2023.3 version on my laptop with an NVIDIA RTX 4060 GPU and 20 OpenMP threads (Core i7, 13th generation). CUDA version: 12.3, NVIDIA driver version: 545.84, OS: Ubuntu 22.04 (installed on WSL2, Windows 11)…

Continue Reading Cuda error with gromacs 2023.3 (CUDA error #700 an illegal memory access) – User discussions

HREMD GPU performance – can I run without PLUMED? – User discussions

GROMACS version: 2023. GROMACS modification: Yes (PLUMED, MPI, CUDA). I am running HREMD on a protein-ligand system. I have determined that I need 32 replicas with an effective temperature schedule from 300K to 425K. This gives me an acceptance rate of about 25%. Each simulation is using HMR and a timestep of 4fs….

Continue Reading HREMD GPU performance – can I run without PLUMED? – User discussions

Lammps Error: Illegal pair_style hdnnp – LAMMPS General Discussion

Hi Lammps Community, I'm trying to do energy minimization of a ternary alloy, and the potential I have been using is hdnnp (high-dimensional neural network potential). This is just a small task to check that everything works properly and to get familiar with hdnnp. Later on I will be conducting MD simulations…

Continue Reading Lammps Error: Illegal pair_style hdnnp – LAMMPS General Discussion

Optimizing LAMMPS for my purposes – LAMMPS Beginners

Nope, still can't post. pastebin.com/uXpAuPny. Sidenote: Pastebin flagged it as potentially harmful… Edit: I could only post it as private, I'm linking here instead. LAMMPS (2 Aug 2023 – Development – patch_2Aug2023-427-g75682ffbca) OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98) using 1 OpenMP thread(s) per MPI task package gpu 0 echo…

Continue Reading Optimizing LAMMPS for my purposes – LAMMPS Beginners

Gromacs MPI + OpenMP build with a single thread – User discussions

miahw November 2, 2023, 4:00pm 1 GROMACS version: 2022.3. GROMACS modification: No. If I do a hybrid MPI + OpenMP build and set OMP_NUM_THREADS=1, will there be a significant performance difference compared with a version built purely with MPI only? Even if I only spawn one OpenMP thread, there…
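
To make the comparison concrete, a minimal sketch of the two configurations (the 32-rank count and topol.tpr are placeholders):

    # Hybrid MPI + OpenMP build, pinned to a single OpenMP thread per rank
    export OMP_NUM_THREADS=1
    mpirun -np 32 gmx_mpi mdrun -ntomp 1 -s topol.tpr
    # The MPI-only baseline would be launched the same way from a build
    # configured without OpenMP support (-DGMX_OPENMP=OFF at cmake time)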

Continue Reading Gromacs MPI + OpenMP build with a single thread – User discussions

A Year in Review: Quansight’s Contributions to PyTorch in 2022 | by Quansight | Quansight | Nov, 2023

2022 was an exciting year for the PyTorch ecosystem. The PyTorch project joining the Linux Foundation was a major milestone, and PyTorch 2.0 was announced with loads of informative talks from the maintainers explaining new features. Additionally, there was marked progress on areas including sparse tensors, JAX-like transformations in PyTorch…

Continue Reading A Year in Review: Quansight’s Contributions to PyTorch in 2022 | by Quansight | Quansight | Nov, 2023

PyTorch Conference: Full Schedule

Integrating an NPU with PyTorch 2.0 Compile – Sol Kim & Juho Ha, FuriosaAIWe would share our experiences of integrating NPU and its compiler, written in Rust, with ‘PyTorch 2.0 Compile’ through the following subjects: 1. Leveraging PyTorch 2.0 Features for NPU Accelerable Model Compilation: We will elaborate on how…

Continue Reading PyTorch Conference: Full Schedule

Keywords in command line for getting good performance in gromacs – User discussions

I am trying to run GROMACS 2023.2 with GPU support on an HPC system, but I am getting very slow performance. I ran the calculation as a test on 8 cores for 10 minutes only. The projected progress was about 15,000 steps in 10 minutes; unfortunately I couldn't get the performance up to the mark….
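
For reference, a minimal sketch of the mdrun offload keywords usually worth trying in this situation (the file prefix and thread count are placeholders; whether each offload helps depends on the hardware):

    # Offload non-bonded, PME, bonded and update work to the GPU; pin threads
    gmx mdrun -deffnm md -ntmpi 1 -ntomp 8 -nb gpu -pme gpu -bonded gpu -update gpu -pin on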

Continue Reading Keywords in command line for getting good performance in gromacs – User discussions

Gromacs 2023 installation issue, slower run with gmx_mpi on multiple nodes – User discussions

GROMACS version: 2023. GROMACS modification: No. I am trying to install Gromacs 2023 on an HPC system and hoping to do it correctly to get the highest performance possible. The HPC has dual Intel Xeon Gold Skylake 6154 (3.0 GHz, 18-core) processors and dual NVIDIA Tesla V100 PCIe 16 GB computational…

Continue Reading Gromacs 2023 installation issue, slower run with gmx_mpi on multiple nodes – User discussions

Parallel Computing Issue with Lammps Simulation – LAMMPS General Discussion

Dear LAMMPS expert, I've been working on a LAMMPS simulation to model the interaction between N2 molecules and a Si surface in parallel. The simulation is being conducted using LAMMPS version 20220107 on a cluster computer optimized for parallel computing. Each node in this cluster is equipped with 2 Intel(R)…

Continue Reading Parallel Computing Issue with Lammps Simulation – LAMMPS General Discussion

How to reduce kspace timing% – LAMMPS Beginners

Roger October 25, 2023, 6:39am 1 Dear lammps users, currently I'm using LAMMPS version 16 Mar 2018 to run a water droplet-silica simulation. I'm using 10 nodes with 40 cpus per node to run a 10nm x 10nm x 10nm system for a 1 ns simulation, and the simulation can't finish within…

Continue Reading How to reduce kspace timing% – LAMMPS Beginners

Install python lammps in a non-standard directory – LAMMPS Installation

izosgi October 23, 2023, 1:20pm 1 Dear all, I am trying to install LAMMPS v2Aug2023.update1 on a cluster. I am using EasyBuild to do this, but it does not succeed in installing the python lammps/ directory into the non-standard location $LAMMPS_DIR/lib64/python/python3.10/site-package. I use cmake and enable python, but the system does not…

Continue Reading Install python lammps in a non-standard directory – LAMMPS Installation

GMX_MPI running – User discussions

Hi everyone! I have some problems with running gmx_mpi on more than one node. I have 4 nodes to run with 64 threads, but when I use a slurm script with "mpirun gmx_mpi …", gromacs starts the process on 4 nodes with… 32 cores in total, when 4 nodes will be with…
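
A minimal Slurm sketch for a multi-node gmx_mpi run (the 4 nodes x 16 ranks x 4 threads layout and the md file prefix are assumptions, not taken from the post):

    #!/bin/bash
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=16
    #SBATCH --cpus-per-task=4
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -deffnm md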

Continue Reading GMX_MPI running – User discussions

Help with running cp2k in parallel with Slurm

Dear all, I am trying to run cp2k on our HPC cluster. I am new to any kind of parallel computing and to working on a cluster, so I would appreciate some help, and I apologize if I am missing something obvious. I have a .sif file in…

Continue Reading Help with running cp2k in parallel with Slurm

Different Output from lammps cpu vs gpu

Why are energy/pressure slightly off, and what is a possible solution for making them equal? Any suggestion/comment? clear package gpu 1 neigh no binsize 5.0 newton off units metal boundary p p p atom_style atomic variable lp equal "2.86" lattice bcc ${lp} orient x 1 0 0 orient y 0 1 0…

Continue Reading Different Output from lammps cpu vs gpu

System-wide installed version of Boost gets picked up during build instead of GROMACS bundled one and results in build failure with GMX_GPU=OpenCL

Our usage of Boost is not compatible with arbitrary Boost versions, which is why we ship the files we require at the exact version we support. However, on certain occasions the system Boost can be picked up with higher priority than the bundled one. For some reason, this seems to affect…

Continue Reading System-wide installed version of Boost gets picked up during build instead of GROMACS bundled one and results in build failure with GMX_GPU=OpenCL

installation issue with qiime2-shotgun-2023.9 – Technical Support

I have experienced an installation issue with qiime2-shotgun-2023.9. Please help me to resolve this issue. Collecting package metadata (repodata.json): done. Solving environment: failed. ResolvePackageNotFound: gxx_impl_linux-64=13.2.0 libgomp=13.2.0 bracken=2.9 xcb-util-image=0.4.0 keyutils=1.6.1 xkeyboard-config=2.38 jack=1.9.22 libcups=2.3.3 alsa-lib=1.2.8 libsanitizer=13.2.0 libgcc-devel_linux-64=13.2.0 gfortran_impl_linux-64=13.2.0 attr=2.5.1 libstdcxx-ng=13.2.0 xcb-util=0.4.0 gcc_impl_linux-64=13.2.0 libnsl=2.0.0 libstdcxx-devel_linux-64=13.2.0 xcb-util-wm=0.4.1 libcap=2.66 xcb-util-keysyms=0.4.0 pulseaudio=16.1 libsystemd0=252 xcb-util-renderutil=0.3.9 libxkbcommon=1.5.0 _openmp_mutex=4.5 libgfortran-ng=13.2.0 libgcc-ng=13.2.0 libudev1=253 gridss=2.13.2…

Continue Reading installation issue with qiime2-shotgun-2023.9 – Technical Support

Parallel Computing on Agate | The Minnesota Supercomputing Institute

Location & Details 575 Walter Library and online About This Event:  In this tutorial, we will give an overview of the Agate cluster resources, the newest high performance computing cluster at MSI. We will walk through examples that use SLURM job arrays, as well as the main two ways to…

Continue Reading Parallel Computing on Agate | The Minnesota Supercomputing Institute

Gromacs: src/gromacs/utility Directory Reference

Directories: tests (unit tests for Low-Level Utilities (utility)). Files: alignedallocator.cpp (implements AlignedAllocator); alignedallocator.h (declares allocation policy classes and allocators that are used to make library containers compatible with alignment requirements of particular hardware, e.g. memory operations for SIMD or…

Continue Reading Gromacs: src/gromacs/utility Directory Reference

Building LAMMPS for Nvidia GraceHopper nodes – LAMMPS Installation

I am trying to build LAMMPS with Kokkos for HOPPER90. However, I am running into compilation errors, with nvc as well as with gcc. With g++ 13.1.0 I am getting the error: extern _Float32 modff32 (_Float32 __x, _Float32 *__iptr) noexcept (true); extern _Float32 __modff32 (_Float32 __x, _Float32 *__iptr) noexcept (true) __attribute__ ((__nonnull__…

Continue Reading Building LAMMPS for Nvidia GraceHopper nodes – LAMMPS Installation

bash – Dynamically change –cpus-per-task between slurm jobs

I am trying to run a bunch of parallel programmes on a cluster, where I vary the number of CPUs between jobs. I tried to use SLURM_ARRAY_SUBMIT_ID to achieve this, which one gets when using #SBATCH --array. My current code right now looks like this, although I already tried…
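
One common workaround, sketched below: since --cpus-per-task cannot vary across the tasks of a single array, submit one job per CPU count from a small loop (the script name run_benchmark.slurm and the CPU counts are hypothetical):

    for cpus in 1 2 4 8 16; do
        sbatch --cpus-per-task=${cpus} --export=ALL,NCPUS=${cpus} run_benchmark.slurm
    done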

Continue Reading bash – Dynamically change –cpus-per-task between slurm jobs

Optimizing the runtime – LAMMPS General Discussion

Hello, I am running a simulation with about 300000 atoms and it takes almost three days to run on 480 cores… I want to optimize it but I'm pretty new to this. The MPI task timing breakdown shows that 'Pair' takes about 50%, 'Comm' about 30% and 'Modify' about 20%. Does this 'Comm'…

Continue Reading Optimizing the runtime – LAMMPS General Discussion

Mdrun : An error occurred in MPI_Allreduce – User discussions

AKA October 1, 2023, 9:58pm 1 GROMACS version: 2022.6. GROMACS modification: No. I am running gromacs-cp2k and I get the following error when running mdrun on a QM/MM system. :-) GROMACS – gmx mdrun, 2022.6 (-: Executable: /usr/local/gromacs-cp2k-gpu/bin/gmx_mpi Data prefix: /usr/local/gromacs-cp2k-gpu Working dir: /home/vivek/Desktop/cp2k_test/tutorial/egfp Command line: gmx_mpi mdrun -s egfp-qmmm-nvt.tpr -deffnm egfp-qmmm-nvt…

Continue Reading Mdrun : An error occurred in MPI_Allreduce – User discussions

Installing PyTorch Geometric w.r.t. CUDA Version

I've been fiddling with the PyTorch Geometric installation lately, and I believe we all know that, although it is an awesome library, it can sometimes be notoriously hard to get working in the first place. Especially if your CUDA version is way too low. That can be very, very problematic….

Continue Reading Installing PyTorch Geometric w.r.t. CUDA Version

CUDA enabled Gromacs Cp2k installation error – User discussions

AKA September 27, 2023, 2:02pm 1 GROMACS version: 2023.2. GROMACS modification: No. CP2K version: 2023.2. Hello all, I was trying to install gromacs with cp2k on an nvidia GPU. I installed the psmp version of the "local_cuda" cp2k build. The cmake command I used is: cmake .. -DBUILD_SHARED_LIBS=OFF -DGMXAPI=OFF -DGMX_INSTALL_NBLIB_API=OFF -DGMX_GPU=CUDA -DGMX_CP2K=ON -DCP2K_DIR=/home/aka/Documents/cp2k-2023.2/lib/local_cuda/psmp -DCMAKE_PREFIX_PATH='/home/aka/Documents/cp2k-2023.2/tools/toolchain/install/openblas-0.3.23;/home/aka/Documents/cp2k-2023.2/tools/toolchain/install/scalapack-2.2.1' -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs-cp2k-gpu/…

Continue Reading CUDA enabled Gromacs Cp2k installation error – User discussions

5 Best Libraries in C/C++ For ML in 2023

As a compiled language, C++, the go-to choice for many developers, is translated into machine code before execution, making it ideal for computationally intensive jobs like training large neural networks. Its robust memory management provides optimisation opportunities for machine learning algorithms. Moreover, it seamlessly integrates with other tools such as CUDA…

Continue Reading 5 Best Libraries in C/C++ For ML in 2023

Ninja: build stopped: subcommand failed. note: see declaration of ‘PyDict_GetItem’

Hello community! This is my first post, so I apologize if I am doing something wrong. I am running into an issue when trying to build pytorch-1.12.1 from source. I have searched the web for a solution but cannot find one. I am not familiar with C++, so this stuff…

Continue Reading Ninja: build stopped: subcommand failed. note: see declaration of ‘PyDict_GetItem’

[slurm-users] Submitting hybrid OpenMPI and OpenMP Jobs

Hello, for this setup it typically helps to disable MPI process binding with "mpirun --bind-to none …" (or similar) so that OpenMP can use all cores. Best, Martin. On 22/09/2023 13:57, Selch, Brigitte (FIDD) wrote: > Hello, > > one of our applications needs hybrid OpenMPI and OpenMP job submission. > > Only one task…
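
A minimal hybrid sketch along the lines of that advice (node and core counts are made up; hybrid_app stands in for the real application):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=2
    #SBATCH --cpus-per-task=16
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    # Disable Open MPI's default binding so each rank's OpenMP threads can use all of its cores
    mpirun --bind-to none -np ${SLURM_NTASKS} ./hybrid_app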

Continue Reading [slurm-users] Submitting hybrid OpenMPI and OpenMP Jobs

CP2K version 2023.2 compile error

Thank you for helping me, Mishra. However, I still have problems. I modified the script according to my architecture, as below: "spack -d install cp2k@2023.1+elpa %g…@8.3.1 target=icelake ^elpa+openmp ^ope…@4.1.4 fabrics=auto". Then I got the very, very long error message below and I don't know what the reason is. ==> [2023-09-22-11:22:05.969419] Error: ProcessError:…

Continue Reading CP2K version 2023.2 compile error

Highly inflated p-values in GWAS by regenie

I was running a GWAS using REGENIE 3.2.5 on more than 250,000 samples, and the p-values returned are highly inflated, with -log10P up to 5000. As a result there were over 10,000 variants called significant under the threshold of p < 5e-8,…

Continue Reading Highly inflated p-values in GWAS by regenie

Gromacs +cp2k installation – User discussions

GROMACS version: 2023.1. Hello, I am trying to compile gromacs-2023.1 with cp2k-2023.2, but I am not able to link the fftwf3 library. So when I run the following command: cmake .. -DBUILD_SHARED_LIBS=OFF -DGMXAPI=OFF -DGMX_INSTALL_NBLIB_API=OFF -DGMX_DOUBLE=ON -DGMX_FFT_LIBRARY=fftw3 -DFFTWF_LIBRARY=/home/edivaldo/cp2k/tools/toolchain/install/fftw-3.3.10/lib -DFFTWF_INCLUDE_DIR=/home/edivaldo/cp2k/tools/toolchain/install/fftw-3.3.10/include -DGMX_BLAS_USER=/home/edivaldo/cp2k/tools/toolchain/install/openblas-0.3.23/lib/libopenblas.a -DGMX_LAPACK_USER=/home/edivaldo/cp2k/tools/toolchain/install/scalapack-2.2.1/lib/libscalapack.a -DGMX_CP2K=ON -DCP2K_DIR="/home/edivaldo/cp2k/lib/local/psmp" -DGMX_MPI=on I keep getting this warning: – The…

Continue Reading Gromacs +cp2k installation – User discussions

How to run delly with multi-threading mode?

Hi, I am wondering how to run delly in multi-threading mode. There is a guideline on the manual page, which indicates using the OpenMP API to do this, but the guideline is too brief for me to follow. This is…
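
A minimal sketch of the usual OpenMP-style invocation (file names are placeholders; check the delly README for the exact options of your version):

    # delly takes its thread count from the OpenMP environment variable
    export OMP_NUM_THREADS=8
    delly call -g reference.fa -o sample.bcf sample.bam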

Continue Reading How to run delly with multi-threading mode?

Segmentation fault (core dumped) during NVT md – User discussions

Miriam September 18, 2023, 7:06am 1 GROMACS version: (not given). GROMACS modification: Yes/No. Hi, I am trying to run MD under NVT conditions (after making a box, adding solvent and ions), and this error comes up: Using 1 MPI process; Using 112 OpenMP threads. Step 0, time 0 (ps) LINCS WARNING relative constraint deviation…

Continue Reading Segmentation fault (core dumped) during NVT md – User discussions

GROMACS SYCL for Intel GPU – User discussions

GROMACS version: 2023.2. GROMACS modification: Yes/No. Hi, I've just installed GROMACS with SYCL enabled; however, it seems that GROMACS cannot detect my GPU. My GPU is Intel Iris Xe Graphics, and I'm using Ubuntu 22.04, CMake 3.27.4, and the Intel oneAPI DPC++/C++ Compiler with the MKL library 2023.2 for SYCL. Here it says SYCL is enabled:…

Continue Reading GROMACS SYCL for Intel GPU – User discussions

Building Pytorch – Missing Symbols – deployment

Cyberes September 7, 2023, 6:09am 1 I’m working on compiling PyTorch in my Dockerfile and running into a strange issue where the compiled libtorch.so only contains 4 symbols: ~ $ nm -D /opt/conda/lib/python3.9/site-packages/torch/lib/libtorch.so w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable w __cxa_finalize w __gmon_start__ Compare that to the libtorch.so from pip: U __cxa_allocate_exception…

Continue Reading Building Pytorch – Missing Symbols – deployment

Installation – environment file not found – Technical Support

rjay428 (Rylee Jensen) September 5, 2023, 10:34pm 1 Hi all! Brand new to QIIME2 and with the Python environment as well. I’m running into an issue during the installation process that is close to what is posted on here, but I haven’t been able to figure out this exact problem….

Continue Reading Installation – environment file not found – Technical Support

Fatal error: Unexpected cudaStreamQuery failure. CUDA error #700 (cudaErrorIllegalAddress) – User discussions

GROMACS version: 2022.3. GROMACS modification: No. I am getting the following error when running gmx mdrun (for equilibrating a lipid+water system) using the GPU. How can I circumvent this? :-) GROMACS – gmx mdrun, 2022.3 (-: Executable: /usr/local/GROMACS/GROMACS-2022.3-CUDA/bin/gmx Data prefix: /usr/local/GROMACS/GROMACS-2022.3-CUDA Working dir: /scratch/psarngadha/30726 Command line: gmx mdrun -nt 8 -nb gpu -bonded…

Continue Reading Fatal error: Unexpected cudaStreamQuery failure. CUDA error #700 (cudaErrorIllegalAddress) – User discussions

GROMACS Installation in Centralised Supercomputing Facility – User discussions

GROMACS version: 2021.4. GROMACS modification: Yes/No. Can anyone guide me through a systematic installation of GROMACS 2021.4 in a centralised supercomputing facility? I want to install GROMACS on the centralised supercomputer, where I wouldn't have any "administrative" rights. I tried to install the GROMACS 2021.4 version into my user account on the CDAC Param Siddhi supercomputing…

Continue Reading GROMACS Installation in Centralised Supercomputing Facility – User discussions

Fails to build binary packages again after successful build

Source: lammps Version: 20220106.git7586adbb6a+ds1-2 Severity: minor Tags: trixie sid ftbfs User: lu…@debian.org Usertags: ftbfs-binary-20230816 ftbfs-binary-after-build User: debi…@lists.debian.org Usertags: qa-doublebuild Hi, this package fails to build a binary-only build (not source) after a successful build (dpkg-buildpackage; dpkg-buildpackage -b). This is probably a clear violation of Debian Policy section 4.9 (clean…

Continue Reading Fails to build binary packages again after successful build

Unable to patch PLUMED with INTEL package – LAMMPS Installation

mence August 8, 2023, 7:23pm 1 Hello, I am trying to patch LAMMPS 29Sep2021 with PLUMED. As it stands, I have managed to get PLUMED working with LAMMPS but I am trying to squeeze some extra performance by using Intel acceleration. To get plumed running on the HPC cluster, I…

Continue Reading Unable to patch PLUMED with INTEL package – LAMMPS Installation

PyTorch 2.0 is 3x slower than 1.11 on very simple example?

Adding more info (torch2) > python -c “import torch;print(torch.__config__.show(), torch.cuda.get_device_properties(0))” PyTorch built with: – GCC 9.3 – C++ Version: 201703 – Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications – Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e) – OpenMP 201511 (a.k.a. OpenMP 4.5) – LAPACK…

Continue Reading PyTorch 2.0 is 3x slower than 1.11 on very simple example?

Cornell Virtual Workshop

Steve Lantz (2021 author), Aaron Birkland (2014 author) Cornell Center for Advanced Computing Revisions: 8/2021, 4/2014 (original) This topic describes several advanced job submission techniques having different characteristics. These submission techniques are largely independent from the runtime environment they create, so each of these techniques may be used with…

Continue Reading Cornell Virtual Workshop

GROMACS on Bridges-2 | PSC

GROMACS on Bridges-2: Optimizing Job Scripts for Performance and Efficiency. June 9, 2023, 2:00 pm – 3:00 pm Eastern time. Join us for this webinar describing how to optimize performance of GROMACS on Bridges-2. Mitchell Dorrell, Pittsburgh Supercomputing Center, mwd@psc.edu. Abstract: There are many factors that affect the performance of…

Continue Reading GROMACS on Bridges-2 | PSC

ARCHER2 Weekly Newsletter

By ARCHER2 Service on July 26, 2023 Tags: newsletters 

Continue Reading ARCHER2 Weekly Newsletter

Gromacs installation with cp2k – User discussions

vikas July 25, 2023, 12:40pm 1 GROMACS version: 2021.4GROMACS modification: NoBelow is the cmake command I am using:cmake … -DBUILD_SHARED_LIBS=OFF -DGMXAPI=OFF -DGMX_INSTALL_NBLIB_API=OFF -DGMX_DOUBLE=ON -DGMX_FFT_LIBRARY=fftw3 -DFFTWF_LIBRARY=$cp2k_install/fftw-3.3.10/lib/ -DFFTWF_INCLUDE_DIR=$cp2k_install/fftw-3.3.10/include -DGMX_BLAS_USER=$cp2k_install/openblas-0.3.20/lib/ -DGMX_LAPACK_USER=$cp2k_install/scalapack-2.1.0/lib/ -DGMX_CP2K=ON -DCP2K_DIR=$cp2k_home/lib/local/psmp/ -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_cp2k -DGMX_LIBS_SUFFIX=_cp2k -DCMAKE_INSTALL_PREFIX=/home/chemistry/phd/cyz208667/Softwares/new_cp2k/gmx_install -DCP2K_LINKER_FLAGS=“-Wl,–enable-new-dtags -L/home/soft/centOS/compilers/gcc/openmpi/4.1.4/lib -Wl,-rpath -Wl,/home/soft/centOS/compilers/gcc/openmpi/4.1.4/lib -Wl,–enable-new-dtags -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/openblas-0.3.20/lib’ -Wl,-rpath=‘/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/openblas-0.3.20/lib’ -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/fftw-3.3.10/lib’ -Wl,-rpath=‘/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/fftw-3.3.10/lib’ -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/libint-v2.6.0-cp2k-lmax-5/lib’ -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/libxc-5.2.3/lib’ -Wl,-rpath=‘/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/libxc-5.2.3/lib’ -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/libxsmm-1.17/lib’ -Wl,-rpath=‘/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/libxsmm-1.17/lib’ -L’/home/chemistry/phd/cyz208667/Softwares/new_cp2k/cp2k-2022.1/tools/toolchain/install/scalapack-2.1.0/lib’ LIBS=…

Continue Reading Gromacs installation with cp2k – User discussions

Gmx mdrun (with OpenCL) no GPU detected – User discussions

GROMACS version: 2023.2. GROMACS modification: Yes/No. Executable: /usr/local/gromacs-2023.2/bin/gmx; Data prefix: /usr/local/gromacs-2023.2; Working dir: /home/1/tutegmx; Command line: gmx --version; GROMACS version: 2023.2; Precision: mixed; Memory model: 64 bit; MPI library: thread_mpi; OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128); GPU support: OpenCL; NB cluster size: 8; SIMD instructions: AVX2_256; CPU FFT library: Intel MKL version 2023.0.1 Build 20230303; GPU FFT library: clFFT; Multi-GPU FFT: none; RDTSCP usage:…

Continue Reading Gmx mdrun (with OpenCL) no GPU detected – User discussions

Lammps – LAMMPS Beginners – Materials Science Community Discourse

Hello. I have started using Lammps on Ubuntu. The simulation works and I get the expected results. However, there are some warnings which I can't fix. One is: hwloc/linux: Ignoring PCI device with non-16bit domain. Pass --enable-32bits-pci-domain to configure to support such devices (warning: it would break the library ABI, don't enable…

Continue Reading Lammps – LAMMPS Beginners – Materials Science Community Discourse

Gromacs OpenCL incompatible GPUs – User discussions

GROMACS version: 2022. GROMACS modification: No. I am attempting to run Gromacs with OpenCL, but it seems that the GPUs are considered incompatible. My hardware is: AMD EPYC 7643 48-Core + 8 NVIDIA RTX A6000 GPUs. I have built Gromacs with: cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=OpenCL. clinfo: Number of platforms 1; Platform…

Continue Reading Gromacs OpenCL incompatible GPUs – User discussions

Error Message reading data – LAMMPS General Discussion

Hello!!! Giving a bit of context, what I want to achieve with this code (in.eam – Google Drive, in.eam (1.5 KB)) is to collide a copper nanoparticle with a nickel surface (I obtained these files from Ovito, so I'm using read_data Cu.data (18.1 KB), Ni.data – Google Drive). However, I…

Continue Reading Error Message reading data – LAMMPS General Discussion

main-amd64-default][science/lammps] Failed for lammps-2022.06.23.1_7 in build

You are receiving this mail as a port that you maintain is failing to build on the FreeBSD package build server. Please investigate the failure and submit a PR to fix build. Maintainer: y…@freebsd.org Log URL: pkg-status.freebsd.org/beefy18/data/main-amd64-default/p8fb94260154e_s510fd83138/logs/lammps-2022.06.23.1_7.log Build URL: pkg-status.freebsd.org/beefy18/build.html?mastername=main-amd64-default&build=p8fb94260154e_s510fd83138 Log: =>> Building science/lammps build started at Fri Jul 14…

Continue Reading main-amd64-default][science/lammps] Failed for lammps-2022.06.23.1_7 in build

Subject:[QIIME2.2023.5] Need help with Qiime2 installation: ResolvePackageNotFound error – Technical Support

Subject: Need help with Qiime2 installation: ResolvePackageNotFound error Dear Qiime2 Community, I hope this message finds you well. I am currently facing an issue during the installation of Qiime2 and would greatly appreciate your assistance in resolving it. During the installation process, after following the Qiime2 instructions, I encountered the…

Continue Reading Subject:[QIIME2.2023.5] Need help with Qiime2 installation: ResolvePackageNotFound error – Technical Support

Mdrun Module of nvt Wrote Unusual pdb Files – User discussions

GROMACS version: 2023.1. GROMACS modification: Yes/No. Hello. I am performing an MD simulation of a pure ionic liquid with 500 molecules. The pdb files of the cation and anion are written separately. Packmol is used to pack 500 cations and anions together in a single pdb file. The Packmol pdb file was…

Continue Reading Mdrun Module of nvt Wrote Unusual pdb Files – User discussions

ARM 23.04 compilers generate incorrect code for GROMACS from -O2 – High Performance Computing (HPC) forum – Support forums

I faced several issues when using the latest ARM 23.04.1 compilers with GROMACS on Fugaku (aka A64fx, aka SVE 512 bits). This is the most problematic one. FWIW, the ARM 23.1 compilers work great, even with -Ofast. The issue can be evidenced with the latest GROMACS 2023.1 and the regression test…

Continue Reading ARM 23.04 compilers generate incorrect code for GROMACS from -O2 – High Performance Computing (HPC) forum – Support forums

nvcc fatal: Unsupported gpu architecture ‘compute_86’

Trying to install on a clean docker image with cuda 11.0 and pytorch 1.7 cu11 on a new rtx3070 card, I am getting the following error during build: nvcc fatal: Unsupported gpu architecture 'compute_86'. Is it possible to add compute_86? My env stack is: sys.platform linux Python 3.6.9 (default, Jul…

Continue Reading nvcc fatal: Unsupported gpu architecture ‘compute_86’

Optimizing LibTorch-based inference engine memory usage and thread-pooling

by Himalay Mohanlal Joriwal, Pierre-Yves Aquilanti, Vivek Govindan, Hamid Shojanazeri, Ankith Gunapal, Tristan Rice Outline In this blog post we show how to optimize LibTorch-based inference engine to maximize throughput by reducing memory usage and optimizing the thread-pooling strategy. We apply these optimizations to Pattern Recognition engines for audio data, for…

Continue Reading Optimizing LibTorch-based inference engine memory usage and thread-pooling

An “Assertion failed” error occurs in GROMACS 2023.1 when CUDA Graphs feature and gmx mdrun “-bonded gpu” argument available at the same time

Summary: Hi, an "Assertion failed" error occurs when I attempt to enable the CUDA Graphs feature and the gmx mdrun "-bonded gpu" argument at the same time in GROMACS 2023.1. Here is the environment setting for GROMACS: #Gromacs export GMX_GPU_DD_COMMS=true export GMX_CUDA_GRAPH=true export GMX_GPU_PME_DECOMPOSITION=true export GMX_GPU_PME_PP_COMMS=true export GMX_FORCE_UPDATE_DEFAULT_GPU=true And here are the error…

Continue Reading An “Assertion failed” error occurs in GROMACS 2023.1 when CUDA Graphs feature and gmx mdrun “-bonded gpu” argument available at the same time

Segmentation fault – core dumped – User discussions

GROMACS version: 2023.1. GROMACS modification: Yes/No. I am currently trying the Martini M3 tutorial (M3-tutorials), which basically starts with a CG simulation of a soluble protein. I have done everything according to the tutorial, but when I want to perform a simple energy minimization I get the following error: gmx mdrun…

Continue Reading Segmentation fault – core dumped – User discussions

creating conda environment from snakemake rule

I am trying to activate a conda environment from Snakemake to use a different environment for a rule in the workflow. I created a .yml file to specify dependencies. This is the Snakemake rule I defined: rule Merge_VCFs: input: f1='file1', f2='file2' output: vcf="output_file" conda: "bcftools_env.yml" shell: """ bcftools merge -m…
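
One detail that often matters here, sketched below as a general note rather than a diagnosis of this particular case: per-rule conda: environments are only built and activated when Snakemake itself is invoked with --use-conda.

    # Run the workflow so that the rule's bcftools_env.yml environment is created and activated
    snakemake --cores 4 --use-conda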

Continue Reading creating conda environment from snakemake rule

HPC 850 – Slurm Tutorial

Tutorial Reference University of Innsbruck 1. Submitting jobs (sbatch) The command sbatch is used to submit jobs to the batch-system using the following syntax: sbatch [options] [job_script.slurm [ job_script_arguments …]] where job_script.slurm represents the (relative or absolute) path to a simple shell script containing the commands to be run on…

Continue Reading HPC 850 – Slurm Tutorial

Bioconductor – ompBAM

DOI: 10.18129/B9.bioc.ompBAM. This package is for version 3.16 of Bioconductor; for the stable, up-to-date release version, see ompBAM. C++ library for OpenMP-based multi-threaded sequential profiling of Binary Alignment Map (BAM) files. Bioconductor version: 3.16. This package provides C++ header files for developers wishing to create R packages that process…

Continue Reading Bioconductor – ompBAM

saturn-python-rapids | Saturn Cloud

2022.01.06 Last updated: 2022-05-16 22:42:30 Package Version Channel _libgcc_mutex 0.1 conda-forge _openmp_mutex 4.5 conda-forge abseil-cpp 20210324.1 conda-forge aiobotocore 2.1.0 conda-forge aiohttp 3.8.1 conda-forge aioitertools 0.10.0 conda-forge aiosignal 1.2.0 conda-forge alsa-lib 1.2.3.2 conda-forge anyio 3.6.1 conda-forge appdirs 1.4.4 conda-forge argon2-cffi 21.3.0 conda-forge argon2-cffi-bindings 21.2.0 conda-forge arrow-cpp 1.0.1 conda-forge arrow-cpp-proc 3.0.0 conda-forge…

Continue Reading saturn-python-rapids | Saturn Cloud

Understanding performance metrics – LAMMPS Beginners

BKFCAW June 16, 2023, 2:49pm 1 I’m running a 3 Nov 2022 build on an AMD 8-core (16-thread) machine at home, as well as on the SDSC Expanse supercomputer with more cores available. At SDSC, I’ve run with 16, 32 and 64 cores. The efficiency in each run is about…

Continue Reading Understanding performance metrics – LAMMPS Beginners

Temperature keeps coming up to 0K – LAMMPS General Discussion

Dear all, hi. I'm having trouble setting and checking the temperature using the npt ensemble. This is my model. After setting the top half to "top" and the bottom half to "bottom" using region and group, I want to set the upper and lower temperatures separately and then check each…

Continue Reading Temperature keeps coming up to 0K – LAMMPS General Discussion

Downloading qiime2 on Ubuntu/WSL – Technical Support

Hello, I have been having issues installing qiime2 on Windows. I read through the instructions on the website, which said that downloading the Windows Subsystem for Linux would be necessary. I also checked the forums, and Francisco Cardenas provided a good guide up to a point. So following the Windows (via…

Continue Reading Downloading qiime2 on Ubuntu/WSL – Technical Support

Increasing and excessive use of memory using OpenGL and AMD GPU – User discussions

swong June 14, 2023, 11:52am 1 GROMACS version: 2023.1. GROMACS modification: Yes/No. I successfully compiled on an AMD GPU node cluster. Each node has 8 gpus, so I'm running 8 simultaneous simulations on each node. As the simulations proceed, the memory usage keeps increasing until some of the jobs…

Continue Reading Increasing and excessive use of memory using OpenGL and AMD GPU – User discussions

Does OpenMP respect LD_PRELOAD? – User discussions

lune June 8, 2023, 4:49pm 1 GROMACS version: 2023.1. GROMACS modification: No. I'm currently profiling gromacs-2023.1 with CUDA acceleration. I'm trying to trace UVM page faults using NVIDIA Nsight Systems with a custom cudaMalloc shim library. It seems, however, that GMX doesn't interact with the CUDA API itself, and instead OpenMP forks…

Continue Reading Does OpenMP respect LD_PRELOAD? – User discussions

MDRUN crash during gREST simulation under NVT ensemble – User discussions

GROMACS version: 2022.3GROMACS modification: No Hello everyone. I am currently using GROMACS ver. 2022.3 patched with PLUMED ver. 2.8.1., and I’ve been trying to replicate the gREST simulations from Oshima et al. (J. Chem. Inf. Model. 2020, 60, 11, 5382–5394) using GROMACS. In the paper, the authors perform gREST simulation…

Continue Reading MDRUN crash during gREST simulation under NVT ensemble – User discussions

Beginner’s guide to Slurm | Center for High Performance Computing

The CHPC uses Slurm to manage resource scheduling and job submission. Users submit jobs on the login node. The queueing system, also known as the job scheduler, will determine when and where to run your jobs. Slurm will factor in the computational requirements of the job, including (but not limited…

Continue Reading Beginner’s guide to Slurm | Center for High Performance Computing

Compiling GROMACS 2023 with Intel LLVM compilers – User discussions

Erik May 24, 2023, 5:01pm 1 GROMACS version: 2023.1. GROMACS modification: No. I'm trying to compile GROMACS 2023 using Intel's LLVM compilers and have run into a number of issues. The first is that they don't seem to play nicely with nvcc because of a bug with the way nvcc sets…

Continue Reading Compiling GROMACS 2023 with Intel LLVM compilers – User discussions

python – Training VIT-Adapter on custom dataset

I'm training the mask2former_beit_adapter_large_896_80k_ms model with a custom dataset, and I have 2 issues that I'm having a hard time figuring out. I tried running train.py, but after each epoch, when validation starts, it is slower than the training speed, which I thought was weird, and I'm not sure what the…

Continue Reading python – Training VIT-Adapter on custom dataset

Low Performance due to low utilisation of GPU – User discussions

GROMACS version: 2023. GROMACS modification: No. My desktop has an Intel(R) Core™ i9-10900K CPU @ 3.70GHz processor and an Nvidia RTX 4090 GPU. This is the gromacs version installed on my system: gmx --version; GROMACS version: 2023; Precision: mixed; Memory model: 64 bit; MPI library: thread_mpi; OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128); GPU support: CUDA; NB cluster size: 8; SIMD instructions: AVX2_256; CPU…

Continue Reading Low Performance due to low utilisation of GPU – User discussions

The jetson nano device reports an error using from torch.profiler import profile – Jetson Nano

After deploying the pytorch environment on the Jetson Nano, an error is reported when using the PyTorch profiler: "AssertionError: Requested Kineto profiling but Kineto is not available, make sure PyTorch is built with USE_KINETO=1". Code shown below: time_test.py (1.6 KB). Hi @18981275647, which PyTorch wheel did…

Continue Reading The jetson nano device reports an error using from torch.profiler import profile – Jetson Nano

Unable to create environment – Technical Support

Tried to create an environment using Conda and was not able to do so. Have copy-pasted the message below. Would be grateful to know what the issue is and how to resolve the issue. (base) C:\Users\Mathangi Janakiraman>wget data.qiime2.org/distro/core/qiime2-2023.2-py38-linux-conda.yml --2023-05-11 12:54:47-- data.qiime2.org/distro/core/qiime2-2023.2-py38-linux-conda.yml Resolving data.qiime2.org (data.qiime2.org)… 54.200.1.12 Connecting to data.qiime2.org (data.qiime2.org)|54.200.1.12|:443… connected. ERROR: cannot verify…

Continue Reading Unable to create environment – Technical Support

Introducing Slurm | Princeton Research Computing – SLURM Examples –

OUTLINE On all of the cluster systems (except Nobel and Tigressdata), users run programs by submitting scripts to the Slurm job scheduler. A Slurm script must do three things: prescribe the resource requirements for the job, set the environment, and specify the work to be carried out in the form of shell commands. Below…
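
As a rough illustration of those three parts, a minimal Slurm script sketch (the job name, resource values, module, and program name are placeholders, not taken from the page):

    #!/bin/bash
    #SBATCH --job-name=example        # 1. resource requirements for the job
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00
    module load anaconda3             # 2. set the environment
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    ./my_program                      # 3. the work, as shell commands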

Continue Reading Introducing Slurm | Princeton Research Computing – SLURM Examples –

python – Error while running computeMatrix command in Deeptools

I am trying to run computeMatrix for a bigwig file in a specific genomic region using deeptools. Below is the code I am using: computeMatrix reference-point --referencePoint TSS \ -b 1000 -a 1000 \ -R ~/Desktop/ATAC/ATAC/Inducible_elements_Greenberg.bed \ -S ~/Desktop/ATAC/Control.mRp.clN.bigWig \ --skipZeros \ -o ~/Desktop/ATAC/matrix_controlmRPATACBasalcomputematrix.gz \ But I get the following error: File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/numpy/__init__.py",…

Continue Reading python – Error while running computeMatrix command in Deeptools

pytorch – I’m having trouble applying mmdetection

I made a custom-format dataset following the MS COCO format. Then I placed the jpg images where the json file is. The train dataset file structure is as follows: mmdetection/data/coco/train/annotation_data/, mmdetection/data/coco/train/image_data. The test dataset follows: mmdetection/data/coco/test/image_data (no annotation data!). The val dataset follows: mmdetection/data/coco/val/annotation_data, mmdetection/data/coco/val/image_data. The annotation json file is organized…

Continue Reading pytorch – I’m having trouble applying mmdetection

bash – Conflicts between Snakemake and GROMACS?

I tried to simplify my issue as much as possible and am still getting the error. The whole idea is that I want to execute (inside a much more complex workflow) the command: gmx mdrun -nt 12 -deffnm emin -cpi on a cluster. For that I have a conda environment with…

Continue Reading bash – Conflicts between Snakemake and GROMACS?

Regression test failure – User discussions

GROMACS version 2023.1. Dear GROMACS forum, I am having an issue with the regression tests following a test install of GROMACS 2023.1 on Ubuntu 22.04.02 LTS on a basic desktop. The installation step worked without a hitch; the output from gmx mdrun -version is pasted below. During the regression tests…

Continue Reading Regression test failure – User discussions

Accelerated Image Segmentation using PyTorch

by Intel. Using Intel® Extension for PyTorch to Boost Image Processing Performance. PyTorch delivers great CPU performance, and it can be further accelerated with Intel® Extension for PyTorch. I trained an AI image segmentation model using PyTorch 1.13.1 (with ResNet34 + UNet architecture) to identify roads and speed limits from…

Continue Reading Accelerated Image Segmentation using PyTorch

Several RDNA 3 Updates Including Better Radeon RX 7000 Support On Linux

AMD has released the new ROCm 5.5 GPU compute stack for the open-source Linux community, which adds improved RDNA 3 support. ROCm 5.5 ships with several new updates and provides better support for the new AMD RDNA 3 architecture. This update brings changes and better support for the Radeon RX…

Continue Reading Several RDNA 3 Updates Including Better Radeon RX 7000 Support On Linux

Why is CUDA unavailable in anaconda environment with pytorch even though its installed successfully, and works in python?

I installed CUDA on my system (Windows 10), then also installed PyTorch. I then created an Anaconda environment and installed the same into it as well. Now my GPU is recognized and is set to available in torch when running through Python, and the necessary CUDA installation is there within…

Continue Reading Why is CUDA unavailable in anaconda environment with pytorch even though its installed successfully, and works in python?

Gromacs 2023 GPU support not working for some reason – User discussions

GROMACS version: 2023. GROMACS modification: No. So I have been trying to get GPU support for GROMACS in my WSL2 Ubuntu system for a few days and have come up short, so I thought I'd try here. I have already run the commands to compile GROMACS from source code…

Continue Reading Gromacs 2023 GPU support not working for some reason – User discussions

Cuda illegal memory access(kokkos) when using multiple GPUs – LAMMPS Development

Dear all, I have encountered a CUDA illegal memory access (lib Kokkos) when using multiple GPUs. The system is a mixture of 2-bead, 3-bead, and 100-bead chains with harmonic bond and angle potential styles. The atom style is set to angle and the pair style is lj/expand. The exact same…

Continue Reading Cuda illegal memory access(kokkos) when using multiple GPUs – LAMMPS Development

Not able to install older pytorch version – vision

Hi, I am getting some conflicts when I am trying to install an older version of PyTorch, using the command "conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch". Also, in the end PyTorch is not getting installed. Below is what is printed on the terminal after running the above command….

Continue Reading Not able to install older pytorch version – vision

GROMACS 2023.1 release notes – GROMACS 2023.1 documentation

This version was released on April 21st, 2023. These release notes document the changes that have taken place in GROMACS since the previous 2023 version, to fix known issues. It also incorporates all fixes made in version 2022.5 and earlier, which you can find described in the Release notes. Fixes…

Continue Reading GROMACS 2023.1 release notes – GROMACS 2023.1 documentation

I5-cp2k Benchmarks – OpenBenchmarking.org

Intel Core i5-12500T testing with a HP 894F (U21 Ver. 02.06.00 BIOS) and llvmpipe on Rocky Linux 8.7 via the Phoronix Test Suite. Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304214-NE-I5CP2K70437 HP Mini i5-12500T cp2k Processor: Intel Core…

Continue Reading I5-cp2k Benchmarks – OpenBenchmarking.org

LAMMPS produce memory error with ReaxFF and OpenMP – LAMMPS Development

jhill April 19, 2023, 2:00pm 1 When I try to run the attached LAMMPS input with: lmp -sf omp -pk omp 12 -in input.dat, I get a segmentation fault (LAMMPS 28Mar2023). Without OpenMP or on a GPU the calculation runs. There is nothing attached to your message. Please try the KOKKOS package instead…
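
For reference, a minimal sketch of the suggested KOKKOS alternative, keeping the 12 OpenMP threads from the original command (this assumes a LAMMPS build configured with the KOKKOS package and its OpenMP backend enabled):

    # Same input, run through KOKKOS with 12 OpenMP threads instead of the OPENMP (omp) package
    lmp -k on t 12 -sf kk -in input.dat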

Continue Reading LAMMPS produce memory error with ReaxFF and OpenMP – LAMMPS Development

Suggestions for optimal task splitting on 8 x RTX2080ti – User discussions

asente April 17, 2023, 8:19pm 1 GROMACS version: 2021.4. GROMACS modification: No. Dear All, I'm trying to set up an MD simulation of a fairly large membrane protein (~300k atoms in the system) and would be grateful if somebody could provide suggestions on how to improve the performance…
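
As one hedged starting point for a single simulation spread over 8 GPUs with thread-MPI (the rank/thread counts, the md file prefix, and the task-to-GPU mapping are guesses to be tuned, not a recommendation from the thread):

    # 8 thread-MPI ranks (1 dedicated PME rank), GPU offload, one GPU per rank
    gmx mdrun -deffnm md -ntmpi 8 -ntomp 6 -npme 1 -nb gpu -pme gpu -bonded gpu -gputasks 01234567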

Continue Reading Suggestions for optimal task splitting on 8 x RTX2080ti – User discussions