计算机与现代化 (Computer and Modernization) ›› 2025, Vol. 0 ›› Issue (10): 7-13. doi: 10.3969/j.issn.1006-2475.2025.10.002

• Image Processing •

Fusion of Spatial Information for YOLOv7 Traffic Sign Detection

  


  1. (School of Computer Science, Xi'an Polytechnic University, Xi'an 710600, Shaanxi, China)
  • Online: 2025-10-27  Published: 2025-10-27
  • About the authors: SHI Hongyu (师红宇, 1981—), female, from Weinan, Shaanxi; senior engineer, M.S.; research interests: image processing, deep learning, intelligent detection; E-mail: shy510213@163.com. ZHANG Zheyu (张哲于, 1998—), male, from Yan'an, Shaanxi; master's candidate; research interests: image processing, deep learning, object detection; E-mail: zzy1195264354@163.com. DU Wen (杜文, 2000—), female, from Huai'an, Jiangsu; master's candidate; research interests: deep learning, image processing, crowd density estimation; E-mail: duvin0512@163.com. LI Yi (李怡, 1977—), female, from Jinzhou, Liaoning; associate professor, Ph.D.; research interests: artificial intelligence, computer detection; E-mail: 119586420@qq.com.
  • Funding: Key Research and Development Program of Shaanxi Province (2022GY-058, 2022GY-074)
      


Abstract: In traffic sign detection, weather conditions and illumination intensity cause false detections and missed detections. To address this problem, a traffic sign detection algorithm that fuses spatial information is proposed. First, coordinate convolution is used in the network to make it more sensitive to coordinate position information. Second, a coordinate attention mechanism is added to the backbone feature extraction so that the spatial position information at the fusion points receives more attention. In the feature fusion stage, a multi-scale weighted fusion network and pyramid pooling are used; weighted computation and skip connections strengthen the fusion of semantic information between low-level and high-level features. Finally, the SIoU bounding box regression loss function is adopted to improve localization accuracy. Experimental results on the CCTSDB2021 and GTSDB datasets show that the proposed method reaches mean Average Precision (mAP) values of 84.9% and 98.5%, respectively, a gain of 5.39 and 1.67 percentage points over the original model and a clear improvement over mainstream detection models, thereby raising the accuracy of traffic sign detection.
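To make the first step concrete, below is a minimal PyTorch sketch of coordinate convolution in the spirit of CoordConv: two normalized coordinate channels are concatenated to the input before an ordinary convolution, which is one common way to expose position information to the network. This is an illustrative sketch, not the authors' exact implementation; the class name CoordConv2d and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Illustrative coordinate convolution: append normalized x/y coordinate
    channels to the input feature map, then apply a standard 2D convolution."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Two extra input channels hold the x and y coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate grids normalized to [-1, 1], broadcast over the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```

For example, CoordConv2d(256, 256, 3, padding=1) could stand in for a plain 3x3 convolution wherever position sensitivity is wanted.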

Key words: traffic sign detection, coordinate convolution, attention mechanism, multi-scale fusion, SIoU loss function
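For the coordinate attention mechanism added to the backbone, a minimal sketch along the lines of the coordinate attention block of Hou et al. is given below: the feature map is pooled separately along the height and width directions, and the resulting direction-aware descriptors are turned into two attention maps that reweight the input. Module and variable names are assumptions; the paper's actual block may differ in activation functions and reduction ratio.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Illustrative coordinate attention: direction-aware pooling along H and W,
    a shared bottleneck transform, then per-direction sigmoid attention maps."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # average over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # average over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)          # (B, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))     # shared bottleneck transform
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w  # broadcast the two attention maps over the input
```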

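The multi-scale weighted fusion described in the abstract can be illustrated with a fast normalized fusion node of the kind used in BiFPN-style necks: each input feature map gets a learnable non-negative weight, and the weighted sum is normalized by the total weight. This is a sketch under the assumption that the inputs have already been resized and projected to a common shape; it is not the paper's exact fusion network.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Illustrative fast normalized fusion:
    out = sum(w_i * f_i) / (sum(w_i) + eps), with w_i = ReLU(learnable scalar)."""

    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        # feats: list/tuple of tensors with identical (B, C, H, W) shapes,
        # e.g. an upsampled high-level map, a lateral map, and a skip connection.
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(wi * f for wi, f in zip(w, feats))
```

Usage might look like fuse = WeightedFusion(3); out = fuse([p_up, p_lat, p_skip]), with any channel projection or upsampling done beforehand.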