Computer and Modernization ›› 2024, Vol. 0 ›› Issue (01): 80-86. DOI: 10.3969/j.issn.1006-2475.2024.01.013
Online: 2024-01-23
Published: 2024-02-26
MENG Na, FANG Wei-wei, LU Hong-ying. A DNN Compression Method for Environmental Sound Classification on Microcontroller Unit[J]. Computer and Modernization, 2024, 0(01): 80-86.
URL: http://www.c-a-m.org.cn/EN/10.3969/j.issn.1006-2475.2024.01.013
[1] NANNI L, MAGUOLO G, BRAHNAM S, et al. An ensemble of convolutional neural networks for audio classification[J]. Applied Sciences, 2021,11(13). DOI: 10.3390/app11135796.
[2] DEMIR F, TURKOGLU M, ASLAN M, et al. A new pyramidal concatenated CNN approach for environmental sound classification[J]. Applied Acoustics, 2020,170. DOI: 10.1016/j.apacoust.2020.107520.
[3] DAVIS N, SURESH K. Environmental sound classification using deep convolutional neural networks and data augmentation[C]// 2018 IEEE Recent Advances in Intelligent Computational Systems (RAICS). 2018:41-45.
[4] NORDBY J. Environmental sound classification on microcontrollers using convolutional neural networks[D]. Norwegian University of Life Sciences, 2019.
[5] SZE V, CHEN Y H, YANG T J, et al. Efficient processing of deep neural networks: A tutorial and survey[J]. Proceedings of the IEEE, 2017,105(12):2295-2329.
[6] SHARMA J, GRANMO O C, GOODWIN M. Environment sound classification using multiple feature channels and attention based deep convolutional neural network[C]// Interspeech 2020. 2020:1186-1190.
[7] ABDOLI S, CARDINAL P, KOERICH A L. End-to-end environmental sound classification using a 1D convolutional neural network[J]. Expert Systems with Applications, 2019,136:252-263.
[8] PALANISAMY K, SINGHANIA D, YAO A. Rethinking CNN models for audio classification[J]. arXiv preprint arXiv:2007.11154, 2020.
[9] LIN J, CHEN W M, LIN Y J, et al. MCUNet: Tiny deep learning on IoT devices[J]. Advances in Neural Information Processing Systems, 2020,33:11711-11722.
[10] DOYU H, MORABITO R, HOLLER J. Bringing machine learning to the deepest IoT edge with TinyML as-a-service[J]. IEEE IoT Newsletter, 2020,11:1-3.
[11] MOHAIMENUZZAMAN M, BERGMEIR C, WEST I, et al. Environmental sound classification on the edge: A pipeline for deep acoustic networks on extremely resource-constrained devices[J]. Pattern Recognition, 2023,133. DOI: 10.1016/j.patcog.2022.109025.
[12] KUMARI S, ROY D, CARTWRIGHT M, et al. EdgeL3: Compressing L3-Net for mote scale urban noise monitoring[C]// 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 2019:877-884.
[13] CERUTTI G, PRASAD R, BRUTTI A, et al. Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms[J]. IEEE Journal of Selected Topics in Signal Processing, 2020,14(4):654-664.
[14] SALAMON J, JACOBY C, BELLO J P. A dataset and taxonomy for urban sound research[C]// Proceedings of the 22nd ACM International Conference on Multimedia. 2014:1041-1044.
[15] PICZAK K J. ESC: Dataset for environmental sound classification[C]// Proceedings of the 23rd ACM International Conference on Multimedia. 2015:1015-1018.
[16] CHENG J, WANG P S, LI G, et al. Recent advances in efficient computation of deep convolutional neural networks[J]. Frontiers of Information Technology & Electronic Engineering, 2018,19(1):64-77.
[17] LOUIZOS C, WELLING M, KINGMA D P. Learning sparse neural networks through L_0 regularization[C]// International Conference on Learning Representations. 2018. DOI: 10.48550/arXiv.1712.01312.
[18] LIU N, MA X L, XU Z Y, et al. AutoCompress: An automatic DNN structured pruning framework for ultra-high compression rates[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020,34(4):4876-4883.
[19] LIU M R, FANG W W, MA X D, et al. Channel pruning guided by spatial and channel attention for DNNs in intelligent edge computing[J]. Applied Soft Computing, 2021,110. DOI: 10.1016/j.asoc.2021.107636.
[20] DUGGAL R, XIAO C, VUDUC R, et al. CUP: Cluster pruning for compressing deep neural networks[C]// 2021 IEEE International Conference on Big Data (Big Data). 2021:5102-5106.
[21] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015.
[22] ROMERO A, BALLAS N, KAHOU S E, et al. FitNets: Hints for thin deep nets[J]. arXiv preprint arXiv:1412.6550, 2015.
[23] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer[J]. arXiv preprint arXiv:1612.03928, 2017.
[24] WANG H, LOHIT S, JONES M, et al. Multi-head knowledge distillation for model compression[J]. arXiv preprint arXiv:2012.02911, 2020.
[25] YANG C G, AN Z L, CAI L H, et al. Hierarchical self-supervised augmented knowledge distillation[C]// Proceedings of the 30th International Joint Conference on Artificial Intelligence. 2021:1217-1223.
[26] CHEN D F, MEI J P, ZHANG Y, et al. Cross-layer distillation with semantic calibration[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021,35(8):7028-7036.
[27] PASSBAN P, WU Y M, REZAGHOLIZADEH M, et al. ALP-KD: Attention-based layer projection for knowledge distillation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021,35(15):13657-13665.
[28] WU Y, REZAGHOLIZADEH M, GHADDAR A, et al. Universal-KD: Attention-based output-grounded intermediate layer knowledge distillation[C]// Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021:7649-7661.
[29] GHOLAMI A, KIM S, DONG Z, et al. A survey of quantization methods for efficient neural network inference[M]// Low-Power Computer Vision. Chapman and Hall/CRC, 2022:291-326.
[30] RASTEGARI M, ORDONEZ V, REDMON J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks[C]// European Conference on Computer Vision. 2016:525-542.
[31] COURBARIAUX M, BENGIO Y, DAVID J P. BinaryConnect: Training deep neural networks with binary weights during propagations[J]. Advances in Neural Information Processing Systems, 2015,28:12-19.
[32] CHOI J, VENKATARAMANI S, SRINIVASAN V V, et al. Accurate and efficient 2-bit quantized neural networks[C]// Proceedings of Machine Learning and Systems. 2019:348-359.
[33] CHOUKROUN Y, KRAVCHIK E, YANG F, et al. Low-bit quantization of neural networks for efficient inference[C]// 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). 2019:3009-3018.
[34] BANNER R, NAHSHAN Y, SOUDRY D. Post training 4-bit quantization of convolutional networks for rapid-deployment[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019:7950-7958.
[35] JACOB B, KLIGYS S, CHEN B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018:2704-2713.
[36] LIANG T L, GLOSSNER J, WANG L, et al. Pruning and quantization for deep neural network acceleration: A survey[J]. Neurocomputing, 2021,461:370-403.
[37] DAVID R, DUKE J, JAIN A, et al. TensorFlow Lite Micro: Embedded machine learning for TinyML systems[J]. Proceedings of Machine Learning and Systems, 2021,3:800-811.