2. Hardware Accelerator for Sparse Dense Matrix Multiplication
Team Members: Ashuthosh, Santosh, Srinivasan and Vishvas
Matrix multiplication has gained importance due to its wide use in deep neural networks. Sparsity in matrices requires special consideration to avoid redundant computations and memory accesses, and it drives the choice of compression format for storage and memory access. In addition to the compression format, the choice of algorithm also influences the performance of the matrix multiplier. The interplay of algorithm and compression format leads to significant variation in performance parameters such as execution time, memory footprint, and total energy consumed.
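To make the role of a compression format concrete, the following is a minimal sketch (in plain Python, not the accelerator's actual datapath) of one widely used format, CSR (compressed sparse row), together with a row-wise sparse-dense multiply over it. The function names are illustrative, not from our tool.

```python
def to_csr(dense):
    """Compress a dense 2D list into CSR arrays (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)   # store only nonzeros
                col_idx.append(j)  # remember their column
        row_ptr.append(len(values))  # where each row's nonzeros end
    return values, col_idx, row_ptr

def csr_spmm(values, col_idx, row_ptr, B):
    """Multiply a CSR sparse matrix A by a dense matrix B.

    Only the nonzeros of A are visited, skipping the redundant
    work and memory traffic that a dense multiply would incur.
    """
    n_rows = len(row_ptr) - 1
    n_cols = len(B[0])
    C = [[0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            a, j = values[k], col_idx[k]
            for c in range(n_cols):
                C[i][c] += a * B[j][c]
    return C

A = [[1, 0, 0], [0, 0, 2], [0, 3, 0]]
B = [[1, 2], [3, 4], [5, 6]]
vals, cols, ptr = to_csr(A)
print(csr_spmm(vals, cols, ptr, B))  # -> [[1, 2], [10, 12], [9, 12]]
```

Other formats (e.g. CSC or COO) trade off differently between access pattern and storage, which is why the format choice interacts with the multiplication algorithm.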
On our custom hardware accelerator for sparse-dense matrix multiplication, the choice of algorithm and compression format accounts for up to a 2X difference in speedup and about a 1.8X difference in energy consumption.
We show that an intelligent choice of algorithm and compression format, based on the sparsity, matrix dimensions, and device specifications, is necessary for performance acceleration. Our exploration tool for identifying the right mix-and-match is available on GitHub.