An improved algorithm for adapting YOLOv5 to helmet wearing and mask wearing detection applications

ZHANG Youyuan1, YANG Guiqin1, DIAO Guangchao2, SUN Cunwei3, WANG Xiaopeng1

 

(1. School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China;2. School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China;3. School of Computer Science and Engineering, University of Electronic Science and Technology, Chengdu 611731, China)

 

Abstract: To detect helmet wearing and mask wearing more efficiently in natural scenes, an improved model, YOLOv5+, is proposed based on the deep learning algorithm YOLOv5. In object detection tasks, small targets are usually detected on a large feature map, and most of the objects to be detected here are small-scale targets. Therefore, with the default input image size of 640×640 pixels, a feature map of 160×160 pixels is added to the detection layers of the original algorithm, and complete intersection over union (CIoU) is selected as the loss function, so as to detect helmet wearing and mask wearing more effectively. The experimental results show that the mean average precision (mAP-50) of the YOLOv5+ network model reaches 93.8% and 92.3% on the helmet-wearing and mask-wearing datasets, respectively, both improvements over the original algorithm. The method not only meets the speed requirement of real-time detection but also improves detection precision.
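The CIoU loss mentioned in the abstract augments plain IoU with a center-distance penalty and an aspect-ratio consistency term. The following is a minimal sketch of that formula in pure Python, assuming boxes given as (x1, y1, x2, y2) corner coordinates; it is an illustration of the standard CIoU definition, not the authors' implementation.

```python
import math

def ciou_loss(b1, b2, eps=1e-9):
    """Complete-IoU loss between two boxes given as (x1, y1, x2, y2)."""
    # Intersection and union areas
    iw = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    ih = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = iw * ih
    w1, h1 = b1[2] - b1[0], b1[3] - b1[1]
    w2, h2 = b2[2] - b2[0], b2[3] - b2[1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # Squared distance between box centers
    rho2 = (((b1[0] + b1[2]) - (b2[0] + b2[2])) ** 2
            + ((b1[1] + b1[3]) - (b2[1] + b2[3])) ** 2) / 4
    # Squared diagonal of the smallest enclosing box
    cw = max(b1[2], b2[2]) - min(b1[0], b2[0])
    ch = max(b1[3], b2[3]) - min(b1[1], b2[1])
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan(w2 / (h2 + eps))
                              - math.atan(w1 / (h1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike plain IoU loss, this stays informative for non-overlapping boxes: the center-distance term still produces a gradient that pulls the predicted box toward the ground truth, which matters for the small targets the paper focuses on.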

 

Key words: YOLOv5; CIoU; helmet wearing detection; mask wearing detection; small target detection

 




 

Citation format: ZHANG Youyuan, YANG Guiqin, DIAO Guangchao, et al. An improved algorithm for adapting YOLOv5 to helmet wearing and mask wearing detection applications. Journal of Measurement Science and Instrumentation, 2023, 14(4): 463-472. DOI: 10.3969/j.issn.1674-8042.2023.04.009

 
