Research on a Target Defect Detection Method Based on an Improved Canny-YOLOv7 Model
First published: 2024-07-05
Abstract: To address the insufficient defect detection capability of current military-industrial products, this work takes the small-target defects that are common in production as its entry point and proposes a fusion enhancement model based on the Canny-YOLOv7 algorithm. The model first applies the Canny edge detection algorithm to identify potential defect regions in an image and to enhance the features of those regions; on this basis, the YOLOv7 model is improved in a targeted way to strengthen its generalization ability. First, to improve the model's ability to extract subtle defect features, the ELAN modules in the original backbone network are replaced with Swin Transformer (STR) modules. Second, the SIoU loss function is adopted so that the improved model quickly learns accurate target localization. Finally, the LeakyReLU activation function is used to reduce computational overhead and further increase detection speed. Experimental results show that, on the same dataset, the proposed enhanced model achieves an average detection precision of 97.5%, an improvement of 4.6% over the original YOLOv7 model, with a detection speed of 52.45 FPS, meeting real-time detection requirements.
Keywords: deep learning; defect detection; YOLOv7 model; Canny algorithm; Swin Transformer
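The abstract describes using Canny edge detection to locate potential defect regions and to enhance their features before the image reaches the detector, but gives no implementation details. The following is a minimal sketch of how such a preprocessing step could look with OpenCV; the thresholds, the dilation kernel size, and the contrast boost factor are illustrative assumptions, not values from the paper.

import cv2
import numpy as np

def enhance_defect_regions(image_bgr, low_thresh=50, high_thresh=150, boost=1.5):
    """Highlight candidate defect regions found by Canny edge detection.

    Edges are dilated into a coarse mask of potential defect areas, and the
    contrast inside that mask is boosted before the image is passed to the
    detector. All numeric defaults here are assumed, not taken from the paper.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress sensor noise
    edges = cv2.Canny(blurred, low_thresh, high_thresh)   # potential defect contours

    # Grow the thin edge map into a region-level mask around each contour
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    mask = cv2.dilate(edges, kernel, iterations=2) > 0

    # Boost contrast only inside the candidate regions
    enhanced = image_bgr.astype(np.float32)
    enhanced[mask] = np.clip(enhanced[mask] * boost, 0, 255)
    return enhanced.astype(np.uint8)

The enhanced image would then be fed to the detector in place of the raw frame, so that subtle, small-scale defect features are more prominent during both training and inference.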
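The SIoU loss mentioned above couples the standard IoU term with angle, distance, and shape costs so that box regression converges toward accurate localization faster. The sketch below is an approximate PyTorch rendering of the published SIoU formulation (Gevorgyan, 2022) for boxes in (x1, y1, x2, y2) form; it is not the authors' code, and the shape-cost exponent theta and the epsilon guard are assumed defaults.

import math
import torch

def siou_loss(pred, target, theta=4.0, eps=1e-7):
    """SIoU bounding-box loss sketch for (N, 4) tensors of (x1, y1, x2, y2) boxes."""
    # Intersection over union
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Box centers and the smallest enclosing box
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    # Angle cost: zero when the center offset is axis-aligned, largest at 45 degrees
    sigma = torch.sqrt((cx_t - cx_p) ** 2 + (cy_t - cy_p) ** 2) + eps
    sin_alpha = (torch.abs(cy_t - cy_p) / sigma).clamp(0, 1)
    angle_cost = 1 - 2 * torch.sin(torch.arcsin(sin_alpha) - math.pi / 4) ** 2

    # Distance cost, modulated by the angle cost
    gamma = 2 - angle_cost
    rho_x = ((cx_t - cx_p) / (cw + eps)) ** 2
    rho_y = ((cy_t - cy_p) / (ch + eps)) ** 2
    dist_cost = (1 - torch.exp(-gamma * rho_x)) + (1 - torch.exp(-gamma * rho_y))

    # Shape cost: relative mismatch in width and height
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    omega_w = torch.abs(w_p - w_t) / (torch.max(w_p, w_t) + eps)
    omega_h = torch.abs(h_p - h_t) / (torch.max(h_p, h_t) + eps)
    shape_cost = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta

    return 1 - iou + (dist_cost + shape_cost) / 2

In a YOLOv7-style training loop this term would stand in for the default IoU-based box-regression component of the total loss, while the objectness and classification terms remain unchanged.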