
Structured Sparsity in Deep Neural Networks

April 18, 2018 | 8 Minute Read

High demand for computation resources severely hinders the deployment of large-scale Deep Neural Networks (DNNs) on resource-constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a larger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of the DNN to efficiently accelerate its evaluation. Experimental results show that SSL achieves average speedups of 5.1× and 3.1× for the convolutional layers of AlexNet on CPU and GPU, respectively, with off-the-shelf libraries; these speedups are about twice those obtained with non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that, for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of the original ResNet with 32 layers. For AlexNet, SSL reduces the error by about 1%.

1 Introduction

Deep neural networks (DNNs), especially deep Convolutional Neural Networks (CNNs), have achieved remarkable success in visual tasks [1][2][3][4][5] by leveraging large-scale networks that learn from a huge volume of data. Deploying such big models, however, is computation-intensive. To reduce computation, many studies have compressed the scale of DNNs, including sparsity regularization [6], connection pruning [7][8], and low rank approximation [9][10][11][12][13]. Sparsity regularization and connection pruning, however, often produce non-structured random connectivity and thus irregular memory access that adversely impacts practical acceleration on hardware platforms. Figure 1 depicts the practical layer-wise speedup of AlexNet non-structurally sparsified by $\ell_1$-norm regularization. Compared to the original model, the accuracy loss of the sparsified model is controlled within 2%. Because of the poor data locality associated with the scattered weight distribution, the achieved speedups are either very limited or negative even when the actual sparsity is high, say >95%. We define sparsity as the ratio of zeros in this paper.

In recently proposed low rank approximation approaches, the DNN is trained first, each trained weight tensor is then decomposed and approximated by a product of smaller factors, and finally fine-tuning is performed to restore the model accuracy. Low rank approximation achieves practical speedups because it coordinates model parameters in dense matrices and avoids the locality problem of non-structured sparsity regularization. However, low rank approximation can only obtain a compact structure within each layer, and the structures of the layers are fixed during fine-tuning, so costly reiterations of decomposing and fine-tuning are required to find a weight approximation that both speeds up computation and retains accuracy.

Inspired by the facts that (1) there is redundancy across filters and channels [11]; (2) shapes of filters are usually fixed as cuboids, but enabling arbitrary shapes can potentially eliminate unnecessary computation imposed by this fixation; and (3) network depth is critical for classification, yet deeper layers cannot always guarantee a lower error because of the exploding-gradient and degradation problems [5], we propose the Structured Sparsity Learning (SSL) method to directly learn a compressed structure of deep CNNs via group Lasso regularization during training. SSL is a generic regularization that adaptively adjusts multiple structures in a DNN, including the structures of filters, channels, and filter shapes within each layer, and the structure of depth beyond the layers. SSL combines structure regularization (on the DNN, for classification accuracy) with locality optimization (on memory access, for computation efficiency), offering not only well-regularized big models with improved accuracy but also greatly accelerated computation (e.g., 5.1× on CPU and 3.1× on GPU for AlexNet). Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn.
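To make the contrast between non-structured and structured sparsity concrete, here is a minimal NumPy sketch (my own illustration, not from the paper; the tensor shape, sparsity level, and pruning threshold are arbitrary assumptions). Randomly scattered zeros leave the weight tensor's shape, and hence the dense compute cost, unchanged, whereas zeroing entire filters lets the surviving filters be packed into a smaller dense tensor that off-the-shelf dense kernels can exploit.

```python
import numpy as np

# Hypothetical conv-layer weights: N filters, C input channels, K x K kernels.
N, C, K = 64, 32, 3
W = np.random.randn(N, C, K, K).astype(np.float32)

# Non-structured sparsity: zero out ~95% of individual weights at random.
mask = np.random.rand(*W.shape) < 0.95
W_random = np.where(mask, 0.0, W)
print("element-wise sparsity:", np.mean(W_random == 0))  # ~0.95
print("dense shape unchanged:", W_random.shape)          # still (64, 32, 3, 3)

# Structured (filter-wise) sparsity: whole filters are removed, so the
# remaining weights form a smaller *dense* tensor with good data locality.
filter_norms = np.sqrt((W ** 2).sum(axis=(1, 2, 3)))
keep = filter_norms > np.percentile(filter_norms, 75)    # keep top 25% of filters
W_compact = W[keep]
print("compact dense shape:", W_compact.shape)           # e.g. (16, 32, 3, 3)
```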

2 Related Work

Connection pruning and weight sparsifying. Han et al. [7][8] reduced the number of parameters in AlexNet and VGG-16 using connection pruning. Since most of the reduction is achieved on fully-connected layers, no practical speedups of convolutional layers are observed, for the same reason illustrated in Figure 1. However, because convolution is more costly and many new DNNs use fewer fully-connected layers (e.g., only 3.99% of the parameters of ResNet-152 [5] are in fully-connected layers), compression and acceleration of convolutional layers become essential. Liu et al. [6] achieved >90% sparsity of convolutional layers in AlexNet with 2% accuracy loss, and bypassed the issue of Figure 1 by hardcoding the sparse weights into the program. In this work, we also focus on convolutional layers. Compared to the previous techniques, our method coordinates sparse weights in adjacent memory space and achieves higher speedups. Note that hardware and program optimizations based on our method can further boost system performance, which is not covered in this paper due to space limits.

Low rank approximation. Denil et al. [9] predicted 95% of the parameters in a DNN by exploiting the redundancy across filters and channels. Inspired by this, Jaderberg et al. [11] achieved a 4.5× speedup on CPUs for scene text character recognition, and Denton et al. [10] achieved 2× speedups for the first two layers of a larger DNN. Both works used Low Rank Approximation (LRA) with about 1% accuracy drop. [13][12] improved and extended LRA to larger DNNs. However, the network structure compressed by LRA is fixed; reiterations of decomposing, training/fine-tuning, and cross-validating are still needed to find a structure with a good accuracy-speed trade-off. As the number of hyper-parameters in LRA methods increases linearly with layer depth [10][13], the search space grows linearly or even exponentially. Compared to LRA, our contributions are: (1) SSL can dynamically optimize the compactness of DNNs with only one hyper-parameter and no reiterations; (2) besides the redundancy within layers, SSL also exploits the necessity of deep layers and reduces their number; (3) DNN filters regularized by SSL have lower-rank approximations, so SSL can work together with LRA for more efficient model compression.

Model structure learning. Group Lasso [14] is an efficient regularization for learning sparse structures. Liu et al. [6] utilized group Lasso to constrain the structure scale of LRA. To adapt DNN structures to different databases, Feng et al. [16] learned the appropriate number of filters in a DNN. Different from prior art, we apply group Lasso to regularize multiple DNN structures (filters, channels, filter shapes, and layer depth). The most closely related parallel work is Group-wise Brain Damage [17], which covers a subset (i.e., learning filter shapes) of our work and further justifies the effectiveness of our techniques.

Figure 2: The proposed Structured Sparsity Learning (SSL) for DNNs. The weights in filters are split into multiple groups. Through group Lasso regularization, a more compact DNN is obtained by removing some groups. The figure illustrates the filter-wise, channel-wise, shape-wise, and depth-wise structured sparsity explored in this work.

3 Structured Sparsity Learning Method for DNNs

We focus mainly on Structured Sparsity Learning (SSL) for convolutional layers to regularize the structure of DNNs. We first propose a generic method to regularize DNN structures in Section 3.1, and then specialize it to the structures of filters, channels, filter shapes, and depth in Section 3.2. Variants of the formulation are also discussed from a computational-efficiency viewpoint in Section 3.3.

3.1 Proposed structured sparsity learning for generic structures

Suppose the weights of the convolutional layers in a DNN form a sequence of 4-D tensors $W^{(l)} \in \mathbb{R}^{N_l \times C_l \times M_l \times K_l}$, where $N_l$, $C_l$, $M_l$ and $K_l$ are the dimensions of the $l$-th ($1 \le l \le L$) weight tensor along the axes of filter, channel, spatial height and spatial width, respectively, and $L$ denotes the number of convolutional layers. Then the proposed generic optimization target of a DNN with structured sparsity regularization can be formulated as:

$$E(W) = E_D(W) + \lambda \cdot R(W) + \lambda_g \cdot \sum_{l=1}^{L} R_g\left(W^{(l)}\right) \qquad (1)$$

Here $W$ represents the collection of all weights in the DNN; $E_D(W)$ is the loss on data; $R(\cdot)$ is non-structured regularization applied to every weight, e.g., the $\ell_2$-norm; and $R_g(\cdot)$ is the structured sparsity regularization on each layer. Because group Lasso can effectively zero out all weights in some groups [14][15], we adopt it in our SSL. The group Lasso regularization on a set of weights $w$ can be represented as $R_g(w) = \sum_{g=1}^{G} \|w^{(g)}\|_g$, where $w^{(g)}$ is a group of partial weights in $w$ and $G$ is the total number of groups. Different groups may overlap. Here $\|\cdot\|_g$ is the group Lasso norm, i.e., $\|w^{(g)}\|_g = \sqrt{\sum_{i=1}^{|w^{(g)}|} \left(w^{(g)}_i\right)^2}$, where $|w^{(g)}|$ is the number of weights in $w^{(g)}$.
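As a concrete illustration of the regularizer in Eq. (1), the following PyTorch sketch (my own, not the authors' Caffe implementation linked above) computes the group Lasso penalty $R_g$ for filter-wise and channel-wise groupings of a single convolutional weight tensor; the layer shape and the coefficient `lambda_g` are assumptions chosen for the example, not values from the paper.

```python
import torch
import torch.nn as nn

def group_lasso(groups: torch.Tensor) -> torch.Tensor:
    """Sum of l2 norms over groups laid out along dim 0:
    R_g(w) = sum_g sqrt(sum_i (w_i^(g))^2).
    A small epsilon keeps the gradient finite when a group is all zeros."""
    return torch.sqrt((groups ** 2).sum(dim=1) + 1e-8).sum()

# Hypothetical conv layer: N_l = 64 filters, C_l = 32 channels, 3x3 kernels.
conv = nn.Conv2d(32, 64, kernel_size=3)
W = conv.weight                  # shape (N_l, C_l, M_l, K_l) = (64, 32, 3, 3)

# Filter-wise groups: one group per output filter W[n_l, :, :, :].
R_filters = group_lasso(W.reshape(W.shape[0], -1))

# Channel-wise groups: one group per input channel W[:, c_l, :, :].
R_channels = group_lasso(W.permute(1, 0, 2, 3).reshape(W.shape[1], -1))

# Structured-sparsity term to be added to the data loss, in the spirit of Eq. (1).
lambda_g = 1e-4   # illustrative regularization strength
loss_ssl = lambda_g * (R_filters + R_channels)
```

During training, `loss_ssl` would simply be added to the data loss before back-propagation, driving entire filter or channel groups toward zero so they can later be removed.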

3.2 Structured sparsity learning for structures of filters, channels, filter shapes and depth

In SSL, the learned “structure” is decided by the way the groups $w^{(g)}$ are split. We investigate and formulate the filter-wise, channel-wise, shape-wise, and depth-wise structured sparsity illustrated in Figure 2. For simplicity, the $R(\cdot)$ term of Eq. (1) is omitted in the following formulations.

Penalizing unimportant filters and channels. Suppose $W^{(l)}_{n_l,:,:,:}$ is the $n_l$-th filter and $W^{(l)}_{:,c_l,:,:}$ is the $c_l$-th channel of all filters in the $l$-th layer. The optimization target of learning the filter-wise and channel-wise structured sparsity can be defined as