ABINIT 9.8.2 on Discoverer HPC
ABINIT is a powerful package for calculating a variety of optical, mechanical, vibrational, and other observable properties of materials. It is written in Fortran 95 and supports a distributed-memory parallelization model based on the MPI library.
This post describes how the ABINIT 9.8.2 code can be compiled for high productivity on Discoverer HPC.
The compilation of the ABINIT code relies on the presence of a number of external auxiliary libraries, which have to be installed on the build host in advance. Most important are those that provide subroutines for linear algebra methods, Fast Fourier Transform, and storage container formats (HDF5, NetCDF4). To that set of “must have” libraries one may add Libxc, a library of exchange-correlation functionals for electronic structure calculations.
The compilation process may be organised following two different approaches, each based on a different choice regarding the location and versions of the auxiliary libraries: (i) a dedicated separate bundle with fixed versions that exists solely to support the compilation and execution of ABINIT, or (ii) general-purpose shared libraries already installed in the public software repository of Discoverer HPC. We decided to follow the former, because some of the auxiliary libraries need to be compiled in a specific way and with a close affinity to the ABINIT code. The corresponding recipe for compiling the code is publicly available at:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/abinit/9/9.8.2
The file abinit-9.8.2-on-discoverer.ac contains the specific options applied during the configuration of the ABINIT code, while abinit-9.8.2.recipe provides the actual shell commands to execute (invoking cmake, the compilers, and the configuration scripts).
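As a minimal sketch of how those two files fit together, assuming the standard autotools-based ABINIT build and purely illustrative paths (the authoritative commands are in abinit-9.8.2.recipe):

    # Illustrative only: unpack the sources and point configure at the options file
    tar xf abinit-9.8.2.tar.gz && cd abinit-9.8.2
    ./configure --with-config-file=/path/to/abinit-9.8.2-on-discoverer.ac \
                --prefix=/path/to/install/abinit/9.8.2
    make -j 16
    make install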
Some of the libraries (HDF5, NetCDF4, FFTW3) require linking against an MPI library; Open MPI 4.1.4 (built with the Intel oneAPI classic compilers) is employed for that purpose. Note that the serial part of the code (both in ABINIT and in the auxiliary libraries) is compiled with the Intel LLVM compilers.
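A hedged illustration of that compiler split, with hypothetical module names (check module avail on Discoverer for the real ones):

    # Assumed module name; the actual one on Discoverer may differ
    module load openmpi/4.1.4        # provides the mpicc/mpifort wrappers
    export CC=icx CXX=icpx FC=ifx    # Intel LLVM compilers for the serial parts
    # MPI-dependent libraries are configured with CC=mpicc FC=mpifort instead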
- HDF5
HDF5 provides the storage back-end for the code. For ABINIT 9.8.2 we employed HDF5 1.13.2 (version 1.14.0 is already available, but it is very fresh and needs some extended examination first). Adding the SZIP compression filter, by linking the HDF5 code against libaec, can help with handling large data sets.
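A minimal sketch of such an HDF5 build, assuming the autotools build system and placeholder prefixes (the exact commands and paths are in the published recipe):

    # Parallel HDF5 with Fortran bindings and the SZIP filter taken from libaec
    cd hdf5-1.13.2
    CC=mpicc FC=mpifort ./configure \
        --prefix=/path/to/bundle \
        --enable-parallel \
        --enable-fortran \
        --with-szlib=/path/to/libaec
    make -j 16 && make install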
- NetCDF4 (C and Fortran)
NetCDF4 is a high-level storage interface built on top of the HDF5 back-end. NetCDF C 4.9.0 and NetCDF Fortran 4.6.0 work as expected with HDF5 1.13.2. The SZIP filter is also available to NetCDF4 (through the linked HDF5 library).
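A minimal sketch of the two NetCDF builds against the bundled HDF5, again with placeholder prefixes (consult the recipe for the exact flags):

    # NetCDF C first, then NetCDF Fortran on top of it
    cd netcdf-c-4.9.0
    CC=mpicc CPPFLAGS="-I/path/to/bundle/include" LDFLAGS="-L/path/to/bundle/lib" \
        ./configure --prefix=/path/to/bundle --enable-netcdf-4
    make -j 16 && make install
    cd ../netcdf-fortran-4.6.0
    CC=mpicc FC=mpifort CPPFLAGS="-I/path/to/bundle/include" \
        LDFLAGS="-L/path/to/bundle/lib" \
        ./configure --prefix=/path/to/bundle
    make -j 16 && make install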
- Linear algebra
One may consider linking against Intel oneMKL the natural choice here, but we have already experienced severe problems with MKL on the AMD EPYC 7H12 64-core processor (plenty of “Segmentation fault” errors). We hope Intel will be able to fix that kind of problem in the next version of oneMKL (then we will try to bring MKL back here). Each of BLAS, LAPACK, and OpenBLAS should be compiled separately and installed into the same bundle location that already contains HDF5 and NetCDF4. Compile OpenBLAS without enabling OpenMP. Unless MPI-parallel execution of the linear algebra methods is necessary, the MPI-based ScaLAPACK library should not participate in the compilation of ABINIT.
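A minimal sketch of the OpenBLAS part of that build, single-threaded and without OpenMP, with an illustrative version number and prefix (the recipe holds the actual options):

    # OpenBLAS built serial: no OpenMP, no internal threading
    cd OpenBLAS-0.3.21
    make CC=icx FC=ifx USE_OPENMP=0 USE_THREAD=0
    make PREFIX=/path/to/bundle install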
- FFTW3
Since MKL is included in neither the compilation nor the execution of ABINIT, FFTW3 is brought into the bundle as a stand-alone library (with the usual kind of optimizations applied).
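A minimal sketch of such a stand-alone FFTW3 build, with an illustrative version and generic optimization switches (the exact set used on Discoverer is in the recipe):

    # FFTW3 with MPI and OpenMP interfaces, tuned for the AVX2-capable EPYC nodes
    cd fftw-3.3.10
    CC=mpicc F77=ifx ./configure --prefix=/path/to/bundle \
        --enable-mpi --enable-openmp --enable-avx2 --enable-shared
    make -j 16 && make install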
- Libxc
Computation of the third derivatives of the exchange-correlation functionals should be enabled in Libxc as a compile-time option. That is the only difference between this build and the one already available in the public software repository of Discoverer HPC.
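A minimal sketch of a Libxc build with third derivatives enabled, assuming the autotools build system and its --enable-kxc switch (version and prefix are illustrative; a cmake-based build would use a different toggle):

    # Libxc with third derivatives (kxc) compiled in
    cd libxc-5.2.3
    CC=icx FC=ifx ./configure --prefix=/path/to/bundle --enable-kxc
    make -j 16 && make install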
On the file system of Discoverer HPC, the bundle with auxiliary libraries lives at:
/opt/software/bundles/abinit/9/9.8.2-intel-openmpi
To compile the ABINIT 9.8.2 code against that bundle, the *.ac configuration file should point the relevant variables to that location. You can check that by previewing the configuration file we used for building the ABINIT executables.
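As an illustration only, assuming the usual ABINIT 9 option names (the authoritative assignments are in abinit-9.8.2-on-discoverer.ac):

    with_hdf5="/opt/software/bundles/abinit/9/9.8.2-intel-openmpi"
    with_netcdf="/opt/software/bundles/abinit/9/9.8.2-intel-openmpi"
    with_netcdf_fortran="/opt/software/bundles/abinit/9/9.8.2-intel-openmpi"
    with_libxc="/opt/software/bundles/abinit/9/9.8.2-intel-openmpi"
    with_fftw3="/opt/software/bundles/abinit/9/9.8.2-intel-openmpi"
    with_linalg_flavor="openblas"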
To run ABINIT 9.8.2 on Discoverer HPC compute nodes, follow the documentation: