HiRep 0.1
Make sure the build commands `Make/nj` and `ninja` are in your `PATH`.
Adjust the file `Make/MkFlags` to set the desired options. The option file can be generated with the `Make/write_mkflags.pl` tool; use `Make/write_mkflags.pl -h` for a list of available options. The most important ones include:
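For instance, the number of colors, the fermion representation, and the gauge group are set at the top of the option file. A minimal sketch, assuming the `NG`, `REPR`, and `GAUGE_GROUP` variable names from the distributed `MkFlags`:

```
NG = 3                    # number of colors
REPR = REPR_FUNDAMENTAL   # fermion representation (adjoint, (anti)symmetric also available)
GAUGE_GROUP = GAUGE_SUN   # gauge group (GAUGE_SON for SO(N))
```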
Comment out the line here when you want to establish certain boundary conditions in the respective direction. Available options include periodic, antiperiodic, open, and twisted boundary conditions. Below is an example for antiperiodic boundary conditions in the time direction and periodic boundary conditions in the spatial directions.
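A sketch of the corresponding `MkFlags` lines; the exact `BC_*` macro names are assumed from the distributed option file:

```
# Antiperiodic in time, periodic in all spatial directions
MACRO += BC_T_ANTIPERIODIC
MACRO += BC_X_PERIODIC
MACRO += BC_Y_PERIODIC
MACRO += BC_Z_PERIODIC
```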
You can select a number of features via the `MACRO` variable. The most important ones are:
Specify whether you want to compile with MPI by using `WITH_MPI`. For compilation with GPU acceleration on CUDA GPUs, enable GPU support with `WITH_GPU` and use the new geometry (`WITH_NEW_GEOMETRY`). If you try to compile for GPUs but forget to set the new geometry, the compilation will fail.
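A sketch of the corresponding feature lines for an MPI build with CUDA acceleration, assuming the macro names above:

```
MACRO += WITH_MPI           # compile with MPI
MACRO += WITH_GPU           # CUDA GPU acceleration
MACRO += WITH_NEW_GEOMETRY  # required whenever WITH_GPU is set
```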
If you want to compile your code for AMD GPUs, an additional flag must be added (see the AMD GPU notes referenced at the end of this section).
`UPDATE_EO` enables even-odd preconditioning, so you never want to disable it.
`NDEBUG` suppresses debug output. If you delete this option, HiRep will print a lot more, mostly unnecessary, output.
`CHECK_SPINOR_MATCHING` performs a check on the geometries of the spinors and is essential for debugging. In general, leaving it on as a safety check does not hurt, but if you simulate with very small local lattices, you may want to disable it and check whether there is a performance improvement.
`IO_FLUSH` flushes the output to file immediately.
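Putting these together, a typical feature block for a production CPU build might look like the following sketch (macro names as above):

```
MACRO += UPDATE_EO              # even-odd preconditioning
MACRO += NDEBUG                 # suppress debug output
MACRO += CHECK_SPINOR_MATCHING  # safety check on spinor geometries
MACRO += IO_FLUSH               # flush output to file immediately
```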
A further setting switches from sequential blocking communications to immediately returning (non-blocking) calls. For multi-GPU jobs on unusual node and network topologies, blocking communications perform better, since they avoid too many requests piling up. For large jobs on a supercomputer, however, non-blocking communications are substantially faster. Blocking communications are the default; adding this flag allows performance tuning.
While the new geometry is the only supported option for GPUs, the old geometry minimizes the number of copies needed for the send buffer synchronization, so you may want to use the old geometry when compiling for CPUs. The relative performance is system dependent, however, so it is worth testing which geometry performs better in your production setting. The old geometry is the default; to use the new geometry, compile with `WITH_NEW_GEOMETRY`.
For GPU setups, there is a kernel improvement that scales better for large gauge groups. When simulating SU(NG) with NG > 5 on NVIDIA GPUs, try enabling the corresponding option. This option is also useful for all gauge groups when using AMD GPUs, because the kernel is optimized to minimize register pressure.
For GPU setups, you can use `hwloc` to make sure that the CPU cores that manage the GPUs on a node are located in the same NUMA domain. For this, compile with the corresponding macro and dynamically link hwloc by adding `-lhwloc` to the `LDFLAGS`.
To compile the code for your laptop, you only need to set the C compiler, for example:
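A minimal sketch, assuming GCC:

```
CC = gcc
CFLAGS = -O3 -std=c99
```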
If you want support for parallelization, you need to use the MPI compiler wrapper instead, for example:
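A sketch, assuming Open MPI or MPICH, whose wrapper is `mpicc`:

```
CC = mpicc
MACRO += WITH_MPI
```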
Another example: To use the Intel compiler and Intel's MPI implementation, and no CUDA, one could use:
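A sketch, assuming Intel's `mpiicc` wrapper (which invokes `icc` under Intel MPI):

```
CC = mpiicc
CFLAGS = -O3
MACRO += WITH_MPI
```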
With GPUs, you can set your choice of C, C++, MPI, and CUDA compilers and their options by using the corresponding compiler variables in `MkFlags`.
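A sketch; the exact variable names below are assumptions based on common build conventions, so check the comments in `Make/MkFlags` for the names used by your version:

```
CC = gcc        # C compiler
CXX = g++       # C++ compiler
MPICC = mpicc   # MPI wrapper (variable name assumed)
NVCC = nvcc     # CUDA compiler (variable name assumed)
```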
For LUMI AMD GPUs, it seems to be favorable to use `hipcc`.
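For instance, a sketch assuming `hipcc` is used as both the C and C++ compiler:

```
CC = hipcc
CXX = hipcc
```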
For more information on configuring the code for AMD GPUs, see the user guide on the GitHub pages.
From the root folder, just type `nj` (this is a tool in the `Make/` folder: make sure it is in your path!). The above will compile the `libhr.a` library and all the available executables in the HiRep distribution, including the executables for dynamical fermions (`hmc`) and pure gauge (`suN`) simulations, as well as all the applicable tests. If you wish to compile only one of the executables, e.g. `suN`, just change to the corresponding directory, e.g. `PureGauge`, and execute the `nj` command from there.
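For example, to build only the pure gauge executable (assuming `Make/nj` is in your `PATH`):

```
cd PureGauge
nj
```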
All build artefacts, except the final executables, are located in the `build` folder at the root directory of the distribution.