Computer and Modernization (计算机与现代化) ›› 2023, Vol. 0 ›› Issue (10): 45-52. doi: 10.3969/j.issn.1006-2475.2023.10.007

• Image Processing •

Image Super-resolution Reconstruction Based on Spatial Attention Residual Network


  1. (College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China)
  • Online: 2023-10-26  Published: 2023-10-26
  • About the authors: XING Shishuai (born 1996), male, from Shangqiu, Henan, M.S. candidate, research interest: image super-resolution, E-mail: 1795861232@qq.com; corresponding author: LIU Danfeng (born 1987), female, from Dalian, Liaoning, lecturer, Ph.D., research interests: remote sensing image processing, machine vision, E-mail: liudanfeng@dlnu.edu.cn; WANG Liguo (born 1974), male, from Harbin, Heilongjiang, professor, Ph.D., research interests: remote sensing hyperspectral image processing, machine learning, E-mail: wangliguo@hrbeu.edu.cn; PAN Yuetao (born 1996), male, from Weifang, Shandong, M.S. candidate, research interest: remote sensing image processing, E-mail: panyuetao@dlnu.edu.cn; MENG Linghong (born 1997), male, from Jining, Shandong, M.S. candidate, research interest: remote sensing image processing, E-mail: 2062452091@qq.com; YUE Xiaohan (born 1997), female, from Weifang, Shandong, M.S. candidate, research interest: image super-resolution processing, E-mail: 2395149089@qq.com.
  • Fund program:
    National Natural Science Foundation of China (62071084)



Abstract: Hierarchical features extracted by convolutional neural networks contain rich semantic information and are crucial for image reconstruction. However, some existing image super-resolution reconstruction methods fail to fully exploit the hierarchical features in convolutional networks. To address this issue, we propose a model termed spatial attention residual network (SARN). Specifically, we first design a spatial attention residual block (SARB) that embeds an enhanced spatial attention (ESA) module into the residual block, allowing the network to capture more effective high-frequency information. Secondly, a feature fusion mechanism is introduced to fuse the features derived from each layer, so that the network can extract more detailed hierarchical features. Finally, the fused features are fed into the reconstruction network to produce the final reconstructed image. Experimental results demonstrate that the proposed model outperforms the compared algorithms in both quantitative evaluation and visual comparison, which indicates that it can effectively exploit the hierarchical features of the image and thus achieve better super-resolution reconstruction performance.
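The pipeline described in the abstract (attention-gated residual blocks, fusion of per-layer hierarchical features, then reconstruction) can be illustrated with a minimal NumPy sketch. Everything below is a hypothetical simplification for illustration only: the paper's ESA is a learned sub-network, whereas `spatial_attention` here is a toy channel-pooling gate, and `identity` merely stands in for the block's convolution layers.

```python
import numpy as np

def spatial_attention(feat):
    """Toy spatial attention gate (hypothetical stand-in for the paper's learned
    ESA module): pool across channels, squash to (0, 1) with a sigmoid, and
    reweight every spatial position of the feature map."""
    pooled = feat.max(axis=0, keepdims=True)   # (1, H, W) channel-wise max-pool
    attn = 1.0 / (1.0 + np.exp(-pooled))       # sigmoid -> per-pixel weights
    return feat * attn                         # broadcast over channels

def sarb(feat, conv):
    """Spatial attention residual block: out = x + attention(conv(x))."""
    return feat + spatial_attention(conv(feat))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))           # toy feature map, C=8, H=W=16
identity = lambda f: f                         # placeholder for the conv layers

b1 = sarb(x, identity)                         # hierarchical features, level 1
b2 = sarb(b1, identity)                        # hierarchical features, level 2
fused = np.concatenate([b1, b2], axis=0)       # feature fusion across levels
print(fused.shape)                             # (16, 16, 16)
```

The residual form keeps the identity path intact, so the attention gate only modulates the residual branch; the fused tensor (here a simple channel-wise concatenation) is what a reconstruction head would consume.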

Key words: super-resolution reconstruction, spatial attention, residual network, feature fusion mechanism, hierarchical features

CLC number: