Scalable Infrastructure Training Module

Ready access to large-scale resources greatly accelerates the development and refinement of applications with substantial processing demands, shortening development, optimization, and validation cycles by orders of magnitude. The datasets in experimental HEP are large and complex, and their processing demands are similarly large; both will only grow with the advent of the HL-LHC, DUNE, and other next-generation experiments. This training module will familiarize participants with tools for large-scale distributed processing across diverse computing resources, including regional and global grids, HPCs, and academic and commercial clouds, as well as new GPU workflows. Examples include the PanDA workload management system and the Intelligent Data Delivery Service (iDDS). Several example applications related to the proposed research projects will be provided. The goal is to equip data-intensive researchers with the skills to use large-scale computing resources effectively.
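As a taste of what the module will cover, the sketch below shows one way a user analysis might be submitted to distributed resources through the PanDA client's `prun` command, wrapped in a small Python helper. This is a minimal illustration, not module material: it assumes `panda-client` is installed and a valid grid proxy exists, and the script, dataset names, and option values are hypothetical placeholders.

```python
# Minimal sketch: submitting a user job through the PanDA client's
# `prun` command, wrapped with subprocess. Assumes panda-client is
# installed and grid credentials are configured; all names below
# are illustrative placeholders.
import subprocess


def submit_prun_job(exec_cmd: str, out_ds: str, in_ds: str | None = None) -> int:
    """Build and run a `prun` submission; return its exit code."""
    cmd = ["prun", "--exec", exec_cmd, "--outDS", out_ds]
    if in_ds is not None:
        cmd += ["--inDS", in_ds]  # optional input dataset
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)
    return result.returncode


if __name__ == "__main__":
    # Hypothetical task: real output dataset names follow the
    # user's grid nickname (e.g. user.<nickname>.<task>).
    submit_prun_job(
        exec_cmd="python analyze.py %IN",    # %IN is expanded by prun to input file names
        out_ds="user.jdoe.example_task_v1",  # placeholder output dataset
        in_ds="mc.example_dataset",          # placeholder input dataset
    )
```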

Coming soon…