LIU Ziyang, JIA Huizhen, WANG Tonghan. No-reference Image Quality Assessment Based on DenseNet and Meta-learning[J]. Computer and Modernization, 2025, 0(12): 81-87.
[1] YI X, JIANG Q P, ZHOU W. No-reference quality assessment of underwater image enhancement[J]. Displays, 2024,81. DOI: 10.1016/j.displa.2023.102586.
[2] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004,13(4):600-612.
[3] HU B, WANG S J, GAO X B, et al. Reduced-reference image deblurring quality assessment based on multi-scale feature enhancement and aggregation[J]. Neurocomputing, 2023,547. DOI: 10.1016/j.neucom.2023.126378.
[4] QIN G Y, HU R Z, LIU Y T, et al. Data-efficient image quality assessment with attention-panel decoder[C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence and 35th Conference on Innovative Applications of Artificial Intelligence and 13th Symposium on Educational Advances in Artificial Intelligence. AAAI, 2023:2091-2100.
[5] RAJEVENCELTHA J, GAIDHANE V H. A no-reference image quality assessment model based on neighborhood component analysis and Gaussian process[J]. Journal of Visual Communication and Image Representation, 2024,98. DOI: 10.1016/j.jvcir.2023.104041.
[6] ZHOU Z H, ZHOU Z N, TAO X Y, et al. EARNet: Error-aware reconstruction network for no-reference image quality assessment[J]. Expert Systems with Applications, 2024,238. DOI: 10.1016/j.eswa.2023.122050.
[7] SAAD M A, BOVIK A C, CHARRIER C. A DCT statistics-based blind image quality index[J]. IEEE Signal Processing Letters, 2010,17(6):583-586.
[8] MOORTHY A K, BOVIK A C. Blind image quality assessment: From natural scene statistics to perceptual quality[J]. IEEE Transactions on Image Processing, 2011,20(12):3350-3364.
[9] AVANAKI N J, GHILDYAL A, BARMAN N, et al. LAR-IQA: A lightweight, accurate, and robust no-reference image quality assessment model[J]. arXiv preprint arXiv:2408.17057, 2024.
[10] ADHIKARI A, LEE S W. AM-BQA: Enhancing blind image quality assessment using attention retractable features and multi-dimensional learning[J]. Image and Vision Computing, 2024,147. DOI: 10.1016/j.imavis.2024.105076.
[11] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[12] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2015:1-9.
[13] SUN W, MIN X, TU D Y, et al. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training[J]. IEEE Journal of Selected Topics in Signal Processing, 2023,17(6):1178-1192.
[14] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016:770-778.
[15] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017:2261-2269.
[16] KIM D, HEO B, HAN D. DenseNets reloaded: Paradigm shift beyond ResNets and ViTs[C]// Proceedings of the 2024 18th European Conference on Computer Vision. Springer, 2024:395-415.
[17] GOLESTANEH S A, DADSETAN S, KITANI K M. No-reference image quality assessment via Transformers, relative ranking, and self-consistency[C]// 2022 IEEE/CVF Winter Conference on Applications of Computer Vision. IEEE, 2022:3989-3999.
[18] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv preprint arXiv:2010.11929, 2020.
[19] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. ACM, 2017:6000-6010.
[20] LIU Z, LIN Y T, CAO Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows[J]. arXiv preprint arXiv:2103.14030, 2021.
[21] ZHU H C, LI L D, WU J J, et al. Generalizable no-reference image quality assessment via deep meta-learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021,32(3):1048-1060.
[22] WEI L S, YAN Q Q, LIU W, et al. Perceptual quality assessment for no-reference image via optimization-based meta-learning[J]. Information Sciences, 2022,611:30-46.
[23] VANSCHOREN J. Meta-learning[M]// Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019:35-61.
[24] ELSKEN T, STAFFLER B, METZEN J H, et al. Meta-learning of neural architectures for few-shot learning[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2020. DOI: 10.1109/CVPR42600.2020.01238.
[25] ZHU H C, LI L D, WU J J, et al. MetaIQA: Deep meta-learning for no-reference image quality assessment[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2020:14131-14140.
[26] YANG S D, WU T H, SHI S W, et al. MANIQA: Multi-dimension attention network for no-reference image quality assessment[J]. arXiv preprint arXiv:2204.08958, 2022.
[27] FINN C. Learning to learn with gradients[D]. Berkeley: University of California, Berkeley, 2018.
[28] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]// Proceedings of the 32nd International Conference on International Conference on Machine Learning. ACM, 2015:448-456.
[29] GLOROT X, BORDES A, BENGIO Y. Deep sparse rectifier neural networks[C]// Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. PMLR, 2011:315-323.