Abstract: HPC centers are facing increasing demand for greater software flexibility to support faster and more diverse innovation in computational scientific work. Containers, which use Linux kernel features to allow a user to substitute their own software stack for that installed on the host, are an increasingly popular method to provide this flexibility. Because standard container technologies such as Docker are unsuitable for HPC, three HPC-specific technologies have emerged: Charliecloud, Shifter, and Singularity.
A common concern is that containers may introduce performance overhead. To our knowledge, no comprehensive, rigorous, HPC-focused assessment of container performance has previously been performed. Our experiment compares the performance of all three HPC container implementations against bare metal on multiple dimensions, using industry-standard benchmarks (SysBench, STREAM, and HPCG).
We found no meaningful performance differences between the four environments, with the possible exception of modest variation in memory usage.
These results suggest that HPC users should feel free to containerize their applications without concern about performance degradation, regardless of the container technology used. This is an encouraging development toward broader adoption of user-defined software stacks and greater flexibility in HPC systems.