[1] PRASOON A, PETERSEN K, IGEL C, et al. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network[C]// 2013 International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2013:246-253.
[2] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[3] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// 2012 Advances in Neural Information Processing Systems. 2012:1097-1105.
[4] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. 2015:1-9.
[5] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016:770-778.
[6] HAN S, MAO H Z, DALLY W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. arXiv preprint arXiv:1510.00149, 2015.
[7] LUO J H, WU J X, LIN W Y. ThiNet: A filter level pruning method for deep neural network compression[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. 2017:5068-5076.
[8] HE W H, ZHANG X Y, YIN F, et al. Deep direct regression for multi-oriented scene text detection[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. 2017:745-753.
[9] LIU Z, LI J G, SHEN Z Q, et al. Learning efficient convolutional networks through network slimming[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. 2017:2755-2763.
[10] HUANG Z H, WANG N Y. Data-driven sparse structure selection for deep neural networks[C]// Proceedings of the 2018 European Conference on Computer Vision (ECCV). 2018:317-334.
[11] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[J]. arXiv preprint arXiv:1608.08710, 2016.
[12] HE Y, DONG X Y, KANG G L, et al. Asymptotic soft filter pruning for deep convolutional neural networks[J]. IEEE Transactions on Cybernetics, 2020,50(8):3594-3604.
[13] LECUN Y, DENKER J S, SOLLA S A. Optimal brain damage[C]// 1990 Advances in Neural Information Processing Systems. 1990:598-605.
[14] LIU Z, SUN M J, ZHOU T H, et al. Rethinking the value of network pruning[J]. arXiv preprint arXiv:1810.05270, 2018.
[15] LEE N, AJANTHAN T, TORR P H S. SNIP: Single-shot network pruning based on connection sensitivity[J]. arXiv preprint arXiv:1810.02340, 2018.
[16] YU K C, SCIUTO C, JAGGI M, et al. Evaluating the search phase of neural architecture search[J]. arXiv preprint arXiv:1902.08142, 2019.
[17] FRANKLE J, CARBIN M. The lottery ticket hypothesis: Finding sparse, trainable neural networks[J]. arXiv preprint arXiv:1803.03635, 2018.
[18] ZHOU H, LAN J, LIU R, et al. Deconstructing lottery tickets: Zeros, signs, and the supermask[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019:3597-3607.
[19] FRANKLE J, DZIUGAITE G K, ROY D M, et al. Stabilizing the lottery ticket hypothesis[J]. arXiv preprint arXiv:1903.01611, 2019.
[20] GALE T, ELSEN E, HOOKER S. The state of sparsity in deep neural networks[J]. arXiv preprint arXiv:1902.09574, 2019.
[21] YOU Y, GITMAN I, GINSBURG B. Large batch training of convolutional networks[J]. arXiv preprint arXiv:1708.03888, 2017.
[22] GOYAL P, DOLLÁR P, GIRSHICK R, et al. Accurate, large minibatch SGD: Training ImageNet in 1 hour[J]. arXiv preprint arXiv:1706.02677, 2017.
[23] LE T H N, QUACH K G, ZHU C C, et al. Robust hand detection and classification in vehicles and in the wild[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2017:1203-1210.
[24] THOMPSON J A F, SCHÖNWIESNER M, BENGIO Y, et al. How transferable are features in convolutional neural network acoustic models across languages?[C]// 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2019:2827-2831.
[25] BELKIN M, HSU D, MA S Y, et al. Reconciling modern machine-learning practice and the classical bias-variance trade-off[J]. Proceedings of the National Academy of Sciences of the United States of America, 2019,116(32):15849-15854.