calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
- Newbie
- Posts: 5
- Joined: Fri Jan 27, 2023 7:42 am
We have installed VASP 6.4.3 on my Dell laptop and on our CDAC Paramshavak. The tests ran well on both systems, but calculations run far too slowly on the CDAC Paramshavak. We have tried the recommended NCORE = 2 and also NCORE = 28, but the calculations remain very slow. However, everything runs fine on my laptop, which is an inferior system compared to the CDAC Paramshavak. We have reinstalled VASP on the Paramshavak and checked everything we could. Please help us with this issue.
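For reference, NCORE is set in the INCAR; the two values we tried correspond to fragments like the sketch below (illustrative, not our actual INCAR):
Code: Select all
! INCAR fragment: NCORE is the number of cores that work on one orbital
NCORE = 2      ! the recommended starting value we tried
! NCORE = 28   ! the second value we tried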
- Global Moderator
- Posts: 153
- Joined: Thu Nov 03, 2022 1:03 pm
Re: calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
Dear fakir_chand1,
Could you provide us with an example of the calculations that you claim are too slow? We would also require information on the modules, makefile.include, and compilation script that you used when installing VASP.
Kind regards,
Pedro
- Newbie
- Posts: 5
- Joined: Fri Jan 27, 2023 7:42 am
Re: calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
Thank you for your kind attention, sir.
A few examples of such calculations are geometry optimizations of materials like MoS2, CrS2, WS2, PtS2, and PtSe2. These are very small systems, with only 3 atoms in their respective unit cells, yet even for them the calculations are slow. For their supercells, say 5×5×1, the calculations are unbelievably slow: a job that should finish in a few hours would take days. This slowdown occurs only on the CDAC Paramshavak.
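For orientation, a minimal geometry-optimization INCAR for one of these 3-atom cells might look like the sketch below (all values are illustrative placeholders, not our actual input, which we can attach):
Code: Select all
! Illustrative relaxation INCAR for a small 2D unit cell (placeholder values)
SYSTEM = MoS2 monolayer
ENCUT  = 500       ! plane-wave cutoff (eV)
EDIFF  = 1E-6      ! electronic convergence (eV)
EDIFFG = -0.01     ! stop when forces drop below 0.01 eV/Angstrom
IBRION = 2         ! conjugate-gradient ionic relaxation
NSW    = 100       ! maximum number of ionic steps
NCORE  = 2         ! cores per orbital, as tried above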
Kind regards,
Us.
- Newbie
- Posts: 5
- Joined: Fri Jan 27, 2023 7:42 am
Re: calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
This is the makefile.include for our installation.
Code: Select all
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxGNU\" \
-DMPI -DMPI_BLOCK=8000 -Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Dvasp6 \
-Duse_bse_te \
-Dtbdyn \
-Dfock_dblbuf
CPP = gcc -E -C -w $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)
FC = mpif90
FCL = mpif90
FREE = -ffree-form -ffree-line-length-none
FFLAGS = -w -ffpe-summary=none
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB = linpack_double.o
# For the parser library
CXX_PARS = g++
LLIBS = -lstdc++
##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -march=native
FFLAGS += $(VASP_TARGET_CPU)
# For gcc-10 and higher (comment out for older versions)
FFLAGS += -fallow-argument-mismatch
# BLAS and LAPACK (mandatory)
LIBDIR = /usr/lib/x86_64-linux-gnu
BLAS = -L$(LIBDIR) -lblas
LAPACK = -L$(LIBDIR) -ltmglib -llapack
BLACS =
BLASPACK = -L$(LIBDIR) -lopenblas
SCALAPACK = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)
LLIBS += $(SCALAPACK) $(LAPACK) $(BLAS) $(BLASPACK)
# FFTW (mandatory)
FFTW_ROOT = /home/user1/Desktop/Priyanka/software/vasp/fftw/fftw-3.3.10
LLIBS += -L$(FFTW_ROOT)/lib -lfftw3
INCS += -I$(FFTW_ROOT)/include
MPI_INC = /usr/lib/x86_64-linux-gnu/openmpi/include
# HDF5-support (optional but strongly recommended)
CPP_OPTIONS+= -DVASP_HDF5
HDF5_ROOT ?= /usr/lib/x86_64-linux-gnu/hdf5/openmpi
LLIBS += -L$(HDF5_ROOT)/lib -lhdf5_fortran
INCS += -I$(HDF5_ROOT)/include
# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS += -L$(WANNIER90_ROOT)/lib -lwannier
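One detail in the listing above that may be worth checking (an observation, not a confirmed diagnosis): LLIBS links both the reference BLAS/LAPACK (-lblas, -llapack) and OpenBLAS (-lopenblas), with the reference libraries listed first, so the unoptimized reference BLAS may be the one actually resolving the BLAS calls. A rough way to check which library serves dgemm_ at run time (the binary path is an illustrative assumption):
Code: Select all
# Libraries the binary is linked against
$ ldd bin/vasp_std | grep -Ei 'blas|lapack'
# glibc loader trace: shows which library dgemm_ binds to
$ LD_DEBUG=bindings bin/vasp_std 2>&1 | grep -i dgemm_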
- Administrator
- Posts: 285
- Joined: Mon Sep 24, 2018 9:39 am
Re: calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
Dear fakir_chand1,
Could you provide us with examples of a fast and a slow job?
We are mostly interested in the input files (INCAR, POTCAR, POSCAR, KPOINTS, batch script) and the output files (stdout, stderr, OUTCAR, OSZICAR).
Also, are you running on one or more compute nodes? Did any compiler errors pop up when compiling VASP on the CDAC machine?
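For reference, the batch script for a single-node run can be as minimal as the sketch below (the core count and binary path are illustrative assumptions); redirecting stdout and stderr produces two of the files requested above:
Code: Select all
#!/bin/bash
# Minimal single-node VASP launch; adjust -np and the binary path
mpirun -np 28 /path/to/vasp_std > stdout.log 2> stderr.log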
- Global Moderator
- Posts: 153
- Joined: Thu Nov 03, 2022 1:03 pm
Re: calculations running too slow on our CDAC Paramshavak system but running fine on my Dell laptop
Dear fakir_chand1,
I see in your makefile.include that you do not have OpenMP support activated. In your submission scripts, or even when running on your laptop, do you run
Code: Select all
$ export OMP_NUM_THREADS=1
before calling VASP? This can significantly improve performance: even though this build of VASP spawns no OpenMP threads itself, the linked libraries may, and pinning them to one thread per MPI rank avoids oversubscribing the cores.
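Since the makefile also links OpenBLAS, it may additionally help to pin OpenBLAS's own thread count; OPENBLAS_NUM_THREADS is the library's native control variable (the launch line below is illustrative):
Code: Select all
$ export OMP_NUM_THREADS=1
$ export OPENBLAS_NUM_THREADS=1
$ mpirun -np 28 /path/to/vasp_std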
Kind regards,
Pedro