Presentation
On the Feasibility of Optical Circuit Switching for Distributed Deep Learning
Session: PHOTONICS: Photonics-Optics Technology Oriented Networking, Information, and Computing Systems
Event Type: Workshop
Tags:
Architectures
Datacenter
Emerging Technologies
Hardware
HPC
I/O
Networks
Photonics
Silicon Fabrication
Time: Monday, 18 November 2019, 12:10pm - 12:30pm
Location: 710
Description: Data parallelism is the dominant method used to train deep learning (DL) models on high-performance computing systems such as large-scale GPU clusters. In this setting, collective communication of large messages, e.g., up to hundreds of MB, between GPUs becomes one of the major bottlenecks. Especially when training a deep learning model on a large number of nodes, inter-node communication becomes the bottleneck because of its higher latency and lower link bandwidth relative to intra-node communication. To cope with this problem, several techniques have been proposed to (a) optimize collective communication algorithms to take the network topology into account, (b) reduce the message size, and (c) overlap communication with computation. All of these approaches address the large-message problem by working around the limitations of the inter-node network. In this study, we instead investigate the benefit of increasing inter-node link bandwidth by using a hybrid switching system that combines Electrical Packet Switching and Optical Circuit Switching. We find that the typical data transfers of synchronous data-parallel training are long-lived and rarely change, so they can be sped up with optical switching. Simulation results with the SimGrid simulator show that our approach speeds up the training time of deep learning applications.
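To make the key observation concrete, below is a minimal sketch (assumed code, not from the paper) of synchronous data-parallel training using mpi4py. The 25.6M parameter count is an assumption chosen to match a ResNet-50-scale model, giving an allreduce payload of roughly 100 MB per iteration, consistent with the "hundreds of MB" messages mentioned above.

# Hypothetical sketch (not the authors' code): synchronous data-parallel
# SGD with mpi4py. Every iteration allreduces the same fp32 gradient
# buffer, so the payload size and the set of communicating peers are
# identical step after step: the long-lived, rarely changing traffic
# pattern that optical circuit switching can serve well.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
NUM_PARAMS = 25_600_000                       # assumed ResNet-50-scale model
grads = np.empty(NUM_PARAMS, dtype=np.float32)
summed = np.empty_like(grads)

if comm.Get_rank() == 0:
    print(f"Per-iteration allreduce payload: {grads.nbytes / 2**20:.0f} MiB")

for step in range(100):
    grads[:] = np.float32(comm.Get_rank())    # stand-in for local backprop
    # Same buffer, same size, same communicator on every iteration:
    comm.Allreduce(grads, summed, op=MPI.SUM)
    summed /= comm.Get_size()                 # averaged gradient for the update

Run with, e.g., mpirun -np 4 python sketch.py. Because the buffer and peer set never change across iterations, the node-to-node traffic matrix is effectively static, which is what makes pre-established optical circuits a plausible fit for this workload.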