Progressive Principal Component Analysis for Compressing Deep Convolutional Neural Networks
[Publication Time] 2021.06.14
[Lead Author] Jing Zhou
[Corresponding Author] Qi Haobo
[Journal] Neurocomputing
[Abstract]
In this work, we propose a progressive principal component analysis (PPCA) method for compressing deep convolutional neural networks. The method starts from a prespecified layer and progressively moves toward the final output layer. For each target layer, PPCA conducts a kernel-wise principal component analysis of the estimated kernel weights, which substantially reduces the number of kernels in that layer. Because the number of kernels in the current layer determines the number of input channels of the next layer, the channels used by the next layer are reduced accordingly; we refer to this as the progressive effect. Consequently, the entire model structure is substantially compressed, and both the number of parameters and the inference cost are greatly reduced, while the prediction accuracy remains highly competitive with that of the baseline model. The effectiveness of the proposed method is evaluated on several classical CNNs (AlexNet, VGGNet, ResNet, and MobileNet) and benchmark datasets.
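The abstract describes reducing a convolutional layer's kernels via PCA, which in turn shrinks the input channels of the following layer. The following is a minimal numpy sketch of that idea, not the authors' implementation: it treats each flattened kernel as an observation, keeps the top principal directions as the new kernels, and the function name `pca_reduce_kernels` and the choice of `n_components` are illustrative assumptions.

```python
import numpy as np

def pca_reduce_kernels(weights, n_components):
    """Illustrative kernel-wise PCA reduction (not the paper's exact algorithm).

    weights: conv kernel tensor of shape (out_channels, in_channels, kh, kw).
    Returns a reduced kernel tensor of shape (n_components, in_channels, kh, kw),
    where each new kernel is a principal direction of the original kernel set.
    """
    out_c = weights.shape[0]
    flat = weights.reshape(out_c, -1)        # one row per kernel
    flat = flat - flat.mean(axis=0)          # center across kernels
    # SVD of the centered kernel matrix; rows of vt are principal directions
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    reduced = vt[:n_components].reshape((n_components,) + weights.shape[1:])
    return reduced

# Example: compress a 64-kernel layer to 16 kernels.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3))       # (out, in, kh, kw)
w_small = pca_reduce_kernels(w, 16)
print(w_small.shape)                          # (16, 3, 3, 3)
```

The progressive effect follows from the shapes: after this step the layer emits 16 feature maps instead of 64, so the next layer's kernels need only 16 input channels, and the reduction propagates layer by layer toward the output.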
[Keywords]
CNN compression; model acceleration; progressive PCA; kernel-wise reduction