Computer and Modernization ›› 2022, Vol. 0 ›› Issue (08): 121-126.

• Image Processing •

Multistage-transformer Large-factor Reconstruction Network: Reference-based Super-resolution

  CHEN Tong, ZHOU Dengwen

  1. (School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China)
  • Online: 2022-08-22  Published: 2022-08-22
  • About the authors: CHEN Tong (b. 1993), male, from Ma'anshan, Anhui, master's student, research interests: computer vision, E-mail: flybiubiu@gmail.com; ZHOU Dengwen (b. 1965), male, from Huangmei, Hubei, professor, research interests: image processing and computer vision.


Abstract: Super-resolution (SR) refers to reconstructing the corresponding high-resolution counterpart of a low-resolution image. To address the inaccurate reconstruction of SR at very large upscaling factors (8×, 16×), this paper proposes MTLF, a multistage-Transformer large-factor reconstruction network. MTLF stacks multiple Transformers in stages to process features at different scales, and refines the attention weights produced by the Transformers with a modified attention module, so as to synthesize finer textures. Finally, the features from all scales are fused into the SR image at the very large factor. Experimental results show that MTLF outperforms the state-of-the-art methods (including both single-image and reference-based super-resolution methods) in terms of peak signal-to-noise ratio and visual quality. Notably, MTLF also achieves fairly good results at the extreme factor of 32×.
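The pipeline the abstract describes — Transformer-style attention transfer from a reference image, a correction step applied to the attention weights, and fusion of the per-stage features — can be sketched roughly as follows. This is a purely illustrative NumPy sketch under stated assumptions, not the authors' implementation: the function names, the sharpening-based weight correction, and the averaging fusion are all placeholders for the paper's actual modules.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(lr_feat, ref_key):
    # Scaled dot-product attention: every LR feature vector attends
    # over the reference features.  Shapes: (n_lr, d) x (n_ref, d).
    d = lr_feat.shape[-1]
    scores = lr_feat @ ref_key.T / np.sqrt(d)      # (n_lr, n_ref)
    return softmax(scores, axis=-1)

def modified_attention(weights, sharpen=2.0):
    # Hypothetical stand-in for the paper's attention-correction module:
    # sharpen the distribution so transfer focuses on the best matches.
    w = weights ** sharpen
    return w / w.sum(axis=-1, keepdims=True)

def multistage_fusion(lr_feat, stages):
    # One attention transfer per stage (scale); fuse by averaging the
    # textures gathered from each stage's reference features.
    fused = np.zeros_like(lr_feat)
    for ref_key, ref_value in stages:
        w = modified_attention(attention_weights(lr_feat, ref_key))
        fused += w @ ref_value                     # (n_lr, d)
    return fused / len(stages)

rng = np.random.default_rng(0)
lr = rng.normal(size=(16, 32))                     # 16 LR patches, dim 32
stages = [(rng.normal(size=(64, 32)), rng.normal(size=(64, 32)))
          for _ in range(3)]                       # 3 stages / scales
print(multistage_fusion(lr, stages).shape)         # (16, 32)
```

The sharpening step only illustrates the general idea of reshaping attention weights before texture transfer; the real correction module would be learned end-to-end with the rest of the network.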

Key words: reference-based, super-resolution, Transformer, large-factor, attention

