计算机与现代化 (Computer and Modernization) ›› 2025, Vol. 0 ›› Issue (10): 20-24. doi: 10.3969/j.issn.1006-2475.2025.10.004

• Image Processing •



  • About the authors: HUANG Shanshan (1992—), female, from Nanjing, Jiangsu; engineer, M.S.; research interests: computer vision, smart grid; E-mail: huangshanshan@sgepri.sgcc.com.cn. LUO Wang (1980—), male, from Nanjing, Jiangsu; senior engineer, Ph.D.; research interests: machine learning, smart grid; E-mail: luowang@sgepri.sgcc.com.cn. HAO Yunhe (1988—), male, from Nanjing, Jiangsu; engineer, M.S.; research interests: machine learning, image recognition; E-mail: haoyunhe@sgepri.sgcc.com.cn.
  • Funding:
    Science and Technology Project of the Headquarters of State Grid Corporation of China (5700-202340675A-3-3-JC)

Power Vision System Based on Knowledge Distillation Technology


  1. (NARI Information & Communication Technology Co., Ltd., Nanjing 210008, China)
  • Online:2025-10-27 Published:2025-10-27




Abstract: This paper proposes a power vision system based on knowledge distillation, aiming to solve the challenges of deploying power vision models in resource-constrained environments. The system dynamically selects a high-performing large-scale power vision model as the teacher and automatically constructs a lightweight model as the student. Knowledge distillation then compares the outputs of the student with those of the teacher and achieves effective knowledge transfer by minimizing the difference between them. Applied to the power vision field, this approach yields a power vision system comprising modules for power vision datasets, dynamic teacher model selection, automated student model construction, knowledge distillation, optimized training, and model evaluation, realizing a fully automated distillation pipeline. Experimental results on the power vision datasets show that the method significantly reduces model complexity and computational resource consumption while maintaining high recognition accuracy, improving the applicability and efficiency of power vision systems in resource-constrained environments.
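The distillation objective described in the abstract, minimizing the difference between the student's and the teacher's outputs, is commonly implemented as a soft-target loss. The minimal NumPy sketch below illustrates the idea; the temperature T = 4 and the T² scaling follow the standard Hinton-style formulation and are illustrative assumptions, not values specified by this paper:

```python
import numpy as np

def softened_probs(logits, T):
    """Temperature-softened softmax: softmax(logits / T)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between the teacher's and the student's softened
    output distributions, scaled by T^2. Minimizing this drives the
    student's outputs toward the teacher's, transferring knowledge."""
    p = softened_probs(teacher_logits, T)   # teacher soft targets
    q = softened_probs(student_logits, T)   # student soft predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

In practice this term is typically combined with an ordinary cross-entropy loss on the ground-truth labels as a weighted sum, so the student learns from both the hard labels and the teacher's soft outputs.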

Key words: large-scale model, knowledge distillation, knowledge transfer, deep learning, power vision system

CLC Number: