Poster 135: High-Performance Deep Learning via a Single Building Block
Time: Thursday, 21 November 2019, 8:30am - 5pm
Description: Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly specialized kernels for each workload/architecture, leading to numerous, complex codebases that strive for performance yet are hard to maintain and do not generalize. In this work, we introduce the batch-reduce-GEMM kernel and show how the most popular DL algorithms can be formulated with this kernel as their basic building block. Consequently, DL library development reduces to mere (potentially automatic) tuning of loops around this single optimized kernel. Exploiting our kernel, we implement Recurrent Neural Network, Convolutional Neural Network, and Multilayer Perceptron training and inference primitives in just 3K lines of high-level code. Our primitives outperform vendor-optimized libraries on multi-node CPU clusters. We also provide CNN kernels targeting GPUs. Finally, we demonstrate that the batch-reduce-GEMM kernel within a tensor compiler yields high-performance CNN primitives.
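The semantics of the batch-reduce-GEMM operation can be illustrated with a minimal NumPy sketch: it accumulates a sum of small matrix products into a single output tile. The function name, signature, and scaling parameters below are illustrative assumptions, not the actual library API.

```python
import numpy as np

def batch_reduce_gemm(C, A_blocks, B_blocks, alpha=1.0, beta=1.0):
    """Illustrative batch-reduce GEMM:
    computes beta * C + alpha * sum_i (A_i @ B_i).

    A_blocks and B_blocks are sequences of conforming matrix
    tiles; a single accumulation covers the whole batch, which
    is what lets one optimized kernel serve many DL layers
    (convolutions, RNN cells, MLP layers) once their tensors
    are blocked into such tile sequences.
    """
    acc = np.zeros_like(C)
    for A_i, B_i in zip(A_blocks, B_blocks):
        acc += A_i @ B_i          # reduce the product batch into one tile
    return beta * C + alpha * acc
```

In an optimized implementation this loop body would be a single JIT-generated kernel call; the sketch only shows the mathematical contract that the loops around the kernel rely on.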