Computer and Modernization (计算机与现代化) ›› 2024, Vol. 0 ›› Issue (06): 76-82. doi: 10.3969/j.issn.1006-2475.2024.06.013

• Image Processing •

Retinal Vessel Segmentation Based on Improved U-Net with Multi-feature Fusion

  

  1. (College of Physical Science and Technology, Central China Normal University, Wuhan 430079, Hubei, China)
  • Online: 2024-06-30  Published: 2024-07-17
  • About the authors: FU Lingli (1996—), female, from Xinyang, Henan, master's student, research interests: image processing, deep learning, E-mail: 1312774516@qq.com; QIU Yu (1996—), female, from Huanggang, Hubei, master's student, research interests: image processing, E-mail: 1543746126@qq.com; corresponding author: ZHANG Xinchen (1977—), male, from Wuhan, Hubei, Ph.D., associate professor, master's supervisor, research interests: image processing, 3D object detection, E-mail: zhangxc@mail.ccnu.edu.cn.
  • Funding:
    Fundamental Research Funds for the Central Universities (CCNV22KZ01, CCNV20TS010); Key Research and Development Program of Hubei Province (2022BAA080)
        



Abstract: Uneven distribution of vessel structures, inconsistent vessel calibers, and poor contrast at vessel boundaries degrade retinal image segmentation and leave it unable to meet the needs of practical clinical assistance. To address the breakage of fine vessels during segmentation and the poor results on small and low-contrast vessels, this paper builds on U-Net: first, a CA module is added to the down-sampling path; second, to remedy the original model's insufficient feature fusion, a Res2NetBlock module is introduced; finally, a cascaded dilated convolution module is added at the bottom of the network to enlarge the receptive field, giving the network information at more spatial scales and strengthening its contextual feature perception, so that the segmentation task achieves better performance. Experiments on the DRIVE, CHASEDB1, and self-made Dataset100 datasets yield accuracies of 96.90%, 97.83%, and 94.24%, and AUC values of 98.84%, 98.98%, and 97.41%, respectively. Comparative experiments with U-Net and other mainstream methods show improvements in sensitivity, accuracy, and other metrics, indicating that the proposed vessel segmentation method can capture complex features and offers clear advantages.
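
To make the described architecture concrete, below is a minimal PyTorch-style sketch of two of the components the abstract names: a cascaded dilated-convolution bottleneck and a coordinate attention block (assuming the "CA" module denotes coordinate attention). The channel sizes, dilation rates, and wiring are illustrative assumptions, not the authors' implementation; the Res2NetBlock and the full improved U-Net are omitted.

# Illustrative sketch only (not the authors' released code).
import torch
import torch.nn as nn

class CascadedDilatedBlock(nn.Module):
    """Bottleneck of 3x3 convolutions with increasing dilation rates; each
    branch feeds the next, and all outputs are fused residually so several
    spatial scales of context are mixed."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):  # dilation rates are assumed
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        out = x
        feats = []
        for branch in self.branches:
            out = branch(out)          # cascaded: each branch sees the previous output
            feats.append(out)
        return x + sum(feats)          # residual fusion of all scales

class CoordinateAttention(nn.Module):
    """Coordinate attention: pools along H and W separately so the attention
    map keeps positional information, then reweights the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # N x C x H x 1
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # N x C x W x 1
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # attention along width
        return x * a_h * a_w

if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)   # a mid-level encoder feature map
    y = CoordinateAttention(64)(CascadedDilatedBlock(64)(x))
    print(y.shape)                   # torch.Size([1, 64, 48, 48])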

Key words: fundus data augmentation, vascular segmentation, improved U-Net, attention mechanism, feature fusion, segmentation algorithm
