
Bug in RPA calculation when increasing ENCUT

Posted: Mon Jul 22, 2024 5:49 pm
by ngocdung_dinh
Hi everyone,

I am trying to calculate the correlation energy of my Nb-doped TiO2 system using the low-scaling RPA algorithm. I followed the instructions on the VASP website (wiki/index.php/ACFDT/RPA_calculations) and ran my calculation in 3 steps: (1) a DFT SCF calculation, (2) exact diagonalization of the KS Hamiltonian with a large number of unoccupied bands, and (3) calculation of the RPA correlation energy. Everything worked fine with ENCUT set to 550 eV. However, when I increased ENCUT to 650 eV (and also increased NBANDS accordingly), I got the following error during the third step:

"internal error in: mpi.F at line: 2536
M_bcast_s: invalid vector size n -1539309568
If you are not a developer, you should not encounter this problem.
Please submit a bug report."

I tried to upload my input and output files but failed, getting this message: "Sorry, the board attachment quota has been reached."
So please find the details of my calculation, for both values of ENCUT, here:
https://drive.google.com/drive/folders/ ... sp=sharing.
I would be grateful if someone could help me resolve this problem.

Re: Bug in RPA calculation when increasing ENCUT

Posted: Tue Jul 23, 2024 1:34 pm
by christopher_sheldon1
Thank you for uploading them to Google Drive; we had a small problem with old attachments taking up too much space, but this should be resolved now.

This is due to the job running in parallel. The size of the MPI vector for WAVEDER that your calculation wants to broadcast is greater than a signed 32-bit integer allows. So when WAVEDER is read, you get an integer overflow. We will make a note of it and see if it can be resolved, but it seems that ENCUT = 650 eV is simply too large at present.
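As an illustration (not VASP's actual Fortran code), the negative size in the error message is exactly what a large element count looks like after wrapping around a signed 32-bit integer; the "true" count below is a hypothetical value chosen only to match the reported number:

```python
import ctypes

# Hypothetical element count larger than 2**31 - 1 (chosen as
# 2**32 - 1539309568 so the wrap matches the error message).
true_count = 2_755_657_728
wrapped = ctypes.c_int32(true_count).value  # reinterpret as signed 32-bit
print(wrapped)  # → -1539309568
```

Any count above 2**31 - 1 = 2147483647 wraps this way, which is why the error reports an "invalid vector size" that is negative.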

Your setup generally looks fine. Is there a particular reason that you are using such large energy cutoffs? Perhaps we can be of more use there.

Best,

Chris

Re: Bug in RPA calculation when increasing ENCUT

Posted: Tue Jul 23, 2024 5:13 pm
by ngocdung_dinh
Dear Chris,

Thank you for your response. I understand the issue now.
The reason I am using a large energy cutoff is to evaluate how the RPA correlation energy converges as the energy cutoff increases.

Best regards,
Dinh

Re: Bug in RPA calculation when increasing ENCUT

Posted: Wed Jul 24, 2024 7:40 am
by christopher_sheldon1
Hi Dinh,

Glad I could help. We do not recommend converging total energies, whether in general or for RPA, as this is virtually impossible. Instead, we recommend looking at energy differences, as they converge much faster than total energies. This is largely because the finite-cutoff errors cancel out somewhat between similar systems. As long as your ENCUT is above ENMAX in the POTCAR file, it is generally a reasonable value to use, although this may differ for very large cells.
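To make the point concrete, here is a minimal sketch with invented total energies (all numbers are hypothetical, purely for illustration): the totals drift by almost an eV between cutoffs, while their difference changes by only a few hundredths:

```python
# Invented total energies (eV) for a defective and a pristine cell,
# at three cutoffs; illustrative values only, not real VASP output.
e_defect = {450: -812.31, 550: -812.95, 650: -813.20}
e_pristine = {450: -820.10, 550: -820.72, 650: -820.96}

for encut in sorted(e_defect):
    diff = e_defect[encut] - e_pristine[encut]
    print(f"ENCUT={encut} eV: energy difference = {diff:.2f} eV")
# Totals shift by ~0.9 eV from 450 to 650 eV,
# while the difference changes by only ~0.03 eV.
```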

Is there a specific energy difference that you want to converge with respect to? E.g. formation energies or adsorption energies.

Best,

Chris

Re: Bug in RPA calculation when increasing ENCUT

Posted: Wed Jul 24, 2024 5:09 pm
by ngocdung_dinh
Dear Chris,

Thank you for your reply.
I appreciate your guidance. My main concern is the formation energy, so I initially examined the total energy. Based on your advice, I will now focus on converging the formation energy instead. Do you have any specific recommendations for calculating formation energy using the RPA low-scaling algorithm in VASP?

Best regards,
Dinh

Re: Bug in RPA calculation when increasing ENCUT

Posted: Thu Jul 25, 2024 8:12 am
by christopher_sheldon1
Hi Dinh,

My main advice when it comes to low-scaling RPA is to be careful with your references. Always using the same ENCUT is fairly standard, but you sometimes need to be careful with other settings, such as the formation-energy reference, the amount of vacuum, and the density of the k-mesh.

For example, with adsorption energies, you can get "converged" adsorption energies by putting the gas-phase reference in too small a cell, but these are not accurate representations of RPA adsorption energies. For the gas-phase reference, it is important to use as large a cell as possible for the exact exchange (the DFT@HF energy), as this decays slowly with respect to volume (V^-1), while the RPA correlation energy decays much more rapidly (V^-2). For small molecules, the exchange term can contribute a large proportion of the energy, so too small a cell underestimates the RPA adsorption energy due to a poor description at the HF level.
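One way to exploit those decay rates is to extrapolate the gas-phase energy to infinite volume. A minimal sketch with synthetic data (the volumes, energies, and coefficients are all invented; only the 1/V and 1/V^2 forms follow the argument above):

```python
import numpy as np

# Synthetic gas-phase energies E(V) = E_inf + c1/V + c2/V**2, with a
# known limit E_inf = -10 eV built in for illustration.
volumes = np.array([1000.0, 2000.0, 4000.0, 8000.0])   # Å^3, invented
energies = -10.0 + 3.0 / volumes + 500.0 / volumes**2

# Linear least squares in x = 1/V recovers the infinite-volume limit
# as the constant term of the fitted quadratic.
x = 1.0 / volumes
c2, c1, e_inf = np.polyfit(x, energies, 2)
print(f"E(V -> inf) ≈ {e_inf:.3f} eV")  # → E(V -> inf) ≈ -10.000 eV
```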

For formation energies, I imagine this would use the bulk as a reference? When I calculated RPA surface energies, I found it tricky to get a k-point mesh for the bulk reference (a platinum primitive cell) equivalent to the surface one, as much denser k-point meshes are affordable for bulk systems. My advice there would be to keep the k-point density similar, to get the closest comparison, even if you can afford a denser mesh for the cheaper system:

k_density = N_k / |b|

where N_k is the number of k-point subdivisions along a reciprocal lattice vector b.
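As a sketch of how that matching could be done (the lattice constant and subdivision counts are invented, and the density is taken as k-points per unit length of the reciprocal lattice vector):

```python
import math

def nk_for_density(b_len, target_density):
    """Smallest subdivision count reaching a target k-point density N_k/|b|.

    Rounds before ceil to guard against floating-point noise."""
    return max(1, math.ceil(round(target_density * b_len, 6)))

# Hypothetical example: a primitive bulk cell vs. a 2x2 surface supercell.
a_bulk = 2.77                    # Å, in-plane lattice constant (illustrative)
b_bulk = 2 * math.pi / a_bulk    # |b| for an orthogonal cell
density = 12 / b_bulk            # density of a 12-subdivision bulk mesh

b_surf = 2 * math.pi / (2 * a_bulk)     # doubling the cell halves |b|
print(nk_for_density(b_surf, density))  # → 6
```

Doubling the in-plane cell halves |b|, so half the subdivisions keep the same sampling density.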

Do you have particular references in mind for calculating the formation energy? E.g. bulk, gas phase, bare surface, bound surface.

Best,

Chris

Re: Bug in RPA calculation when increasing ENCUT

Posted: Thu Jul 25, 2024 9:49 am
by ngocdung_dinh
Dear Chris,

Yes, I will use the bulk as the reference for my formation energy calculations. Your explanation was very helpful, and I now understand what I need to do to proceed with my calculations.

Thank you again for your valuable guidance.

Best regards,
Dinh

Re: Bug in RPA calculation when increasing ENCUT

Posted: Thu Jul 25, 2024 9:54 am
by christopher_sheldon1
Hi Dinh,

Glad it was helpful.

Best wishes,

Chris

Re: Bug in RPA calculation when increasing ENCUT

Posted: Fri Oct 11, 2024 1:21 pm
by christopher_sheldon1

Hi Dinh,

I've just discussed this with a colleague, and he noticed that the number of ranks and nodes you are using is very large. We presume this is so that you have enough RAM for the RPA calculation. We suspect that the crash is due to your use of heterogeneous nodes that are not interconnected by InfiniBand. A fix could be to use homogeneous nodes (i.e. all nodes with 4 cores or all with 8 cores, not a mixture).

A few hints to improve efficiency: reduce the number of MPI ranks and use only one thread per core.

Best wishes,

Chris