Description

While scientific software is widely used across science and engineering disciplines, developing accurate and reliable scientific software is notoriously difficult. One of the most serious difficulties is dealing with floating-point arithmetic in numerical computations. Round-off errors occur and accumulate at all levels of computation, and compiler optimizations and low-precision arithmetic can significantly affect the final computational results. With accelerators now dominating high-performance computing systems, computational scientists face even bigger challenges, since ensuring numerical reproducibility on these systems is a very difficult problem.
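As a minimal illustration of the round-off accumulation mentioned above (not taken from the tutorial itself), the snippet below shows that repeatedly adding 0.1, which has no exact binary representation, drifts away from the mathematically exact sum, while an error-compensating summation recovers it:

```python
import math

# 0.1 cannot be represented exactly in binary floating point,
# so every addition below introduces a small rounding error.
terms = [0.1] * 10

naive = sum(terms)              # accumulates one rounding error per addition
compensated = math.fsum(terms)  # correctly rounded (error-compensated) sum

print(naive == 1.0)        # False: round-off has accumulated
print(compensated == 1.0)  # True: compensated summation recovers 1.0
```

The same effect, compounded over millions of operations and reordered by compiler optimizations, is what makes large-scale numerical results hard to reproduce.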
This tutorial will demonstrate tools that are available today for analyzing floating-point scientific software. We focus on tools that help programmers gain insight into how different aspects of floating-point arithmetic affect their code and how to fix potential bugs. The floating-point analysis areas covered in the tutorial include compiler optimizations, floating-point exceptions on GPUs, precision tuning, sensitivity to rounding errors, non-determinism, and data races.
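To give a flavor of why precision tuning matters, the sketch below (an illustrative example, not one of the tutorial's tools; `round_to_f32` is a hypothetical helper name) measures the error introduced when a double-precision value is demoted to single precision, which is the kind of trade-off precision-tuning tools quantify automatically:

```python
import struct

def round_to_f32(x: float) -> float:
    """Round a Python double to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0          # not exactly representable in either precision
x32 = round_to_f32(x)  # value after demotion to 32-bit float

err = abs(x - x32)
print(err > 0.0)  # True: single precision loses accuracy relative to double
```

Precision-tuning tools automate this kind of analysis across a whole program, searching for variables that can safely use lower precision without exceeding a user-specified error bound.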