Discoverer is LLVM-ready
We’ve been examining the deployment of the LLVM compiler infrastructure in high-performance computing for more than a year. LLVM-based compilers offer modern, and in many cases more effective and reliable, schemes for generating binary code. A growing number of source code distributions ship with LLVM-compatible CMake configurations; GROMACS is one of them. The Python ecosystem is also actively adopting LLVM for HPC. Numba is one such application: it builds on LLVM bindings to provide Just-In-Time (JIT) compilation of Python code.
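To make that concrete, here is a minimal Numba sketch (assuming a standard Numba installation; the function and array names are illustrative) showing how a plain Python function is compiled to native code through LLVM:

    import numpy as np
    from numba import njit

    @njit  # compile this function to native machine code via LLVM
    def dot(a, b):
        total = 0.0
        for i in range(a.shape[0]):
            total += a[i] * b[i]
        return total

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    dot(x, y)          # the first call triggers the LLVM-backed JIT compilation
    print(dot(x, y))   # later calls reuse the compiled native code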
Hardly any modern high-performance computing software stack avoids LLVM these days. To guarantee adequate LLVM compiler support for the projects utilizing Discoverer, we provide access to all major variants of the LLVM compiler infrastructure:
The installation of the vanilla LLVM compiler infrastructure is supported by our team. We regularly download the source code of the project from its GitHub repository:
https://github.com/llvm/llvm-project
and compile it using build recipes that we develop and maintain:
https://gitlab.discoverer.bg/vkolev/recipes/-/tree/main/llvm
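For illustration only, the essence of such a recipe is sketched below in Python, using subprocess to drive git and cmake. The chosen projects and flags here are assumptions, not our production settings, which live in the repository linked above:

    import subprocess

    # Sketch of a vanilla LLVM build; the real options are kept in our recipes.
    subprocess.run(
        ["git", "clone", "--depth", "1",
         "https://github.com/llvm/llvm-project.git"], check=True)
    subprocess.run(
        ["cmake", "-S", "llvm-project/llvm", "-B", "build",
         "-DCMAKE_BUILD_TYPE=Release",
         "-DLLVM_ENABLE_PROJECTS=clang;lld"], check=True)
    subprocess.run(["cmake", "--build", "build", "-j", "8"], check=True)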
Intel oneAPI ships with Intel’s LLVM-based compilers (icx, icpx, ifx). The classic Intel compilers are still included in that package, but they will not be part of the upcoming Intel oneAPI release. We strongly urge all of our users who want to stick with Intel oneAPI to consider migrating to the LLVM-based Intel compilers.
Next, we are moving towards enhanced LLVM support for Python. One approach is to point users to the PyTorch and TensorFlow builds included in the Intel oneAPI installation. Another is to develop and maintain our own LLVM-built, optimised Python modules that are not available in the Intel Conda channel.
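As a quick sanity check (a sketch assuming a standard Numba/llvmlite installation), users can query which LLVM version backs their Python stack:

    import numba
    import llvmlite.binding as llvm

    # llvmlite embeds a specific LLVM version that Numba compiles through
    print("Numba version:", numba.__version__)
    print("Embedded LLVM:", ".".join(map(str, llvm.llvm_version_info)))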