Abstract: Computing has always been part of experimental science, and it increasingly plays a central role in enabling scientific discovery. Technological advances are providing more powerful supercomputers and highly efficient specialized chip architectures. These advances are also improving instrument technology, yielding higher-resolution detectors that produce orders of magnitude more data and open access to exciting new scientific domains.
The compute needs of such instruments are typically mixed, sometimes calling for cloud or small-cluster computing and sometimes for dedicated access to a supercomputer. Because many experiments require real-time data analysis, how quickly compute power can be made available and how quickly the data can be transferred to the compute site are both critical. Using real-life examples from high energy physics, microscopy, and genomics, I will discuss how experimental science is taking advantage of both cloud and near-exascale HPC resources. I will outline some of the challenges we see in operating such workflows across multiple sites, with a focus on system responsiveness and data management issues.