Object detection of artifact threaded hole based on Faster R-CNN
ZHANG Zhengkai, QI Lang
(School of Mechanical and Electrical Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
Abstract: In order to improve the accuracy of threaded hole object detection, we propose an object detection method for workpiece threaded holes based on the faster region-based convolutional neural network (Faster R-CNN), combining a dual-camera vision system with Hough transform circle detection. First, a dual-camera image acquisition system is established. One industrial camera, mounted at a high position, collects the whole image of the workpiece, and the suspected threaded-hole positions on the workpiece are preliminarily selected by the Hough transform circle detection algorithm. Then, the other industrial camera collects close-up local images of the suspected threaded holes detected by the Hough transform, one by one. Next, a Faster R-CNN object detection model with a ResNet50 backbone is trained on a self-built threaded-hole data set. Finally, each local threaded-hole image is input into the trained Faster R-CNN model for further identification and localization. The experimental results show that the proposed method effectively avoids the small-object detection problem for threaded holes, and achieves higher recognition and positioning accuracy than using either the Hough transform or Faster R-CNN alone.
Key words: object detection; threaded hole; deep learning; region-based convolutional neural network (Faster R-CNN); Hough transform
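The two-stage pipeline described in the abstract (Hough transform pre-screening on the full workpiece image, followed by Faster R-CNN verification of each close-up candidate image) can be sketched in code. The paper does not give an implementation, so the following is a minimal illustrative sketch, assuming OpenCV's HoughCircles for stage one and torchvision's Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the authors' model trained on their self-built threaded-hole data set; all file names, Hough parameters, the score threshold, and the crop size around each candidate are assumptions, and the crop merely stands in for the second camera's close-up acquisition.

```python
# Minimal sketch of the two-stage detection pipeline described in the abstract.
# Assumptions (not from the paper): OpenCV for Hough circle detection,
# torchvision's Faster R-CNN (ResNet-50 FPN backbone) as a stand-in for the
# authors' self-trained model, and illustrative parameter values throughout.

import cv2
import torch
import torchvision


def find_candidate_holes(gray_image):
    """Stage 1: Hough transform circle detection on the whole workpiece image
    (high-mounted camera) to pre-select suspected threaded-hole positions."""
    circles = cv2.HoughCircles(
        gray_image, cv2.HOUGH_GRADIENT,
        dp=1.2, minDist=30,          # illustrative parameters
        param1=100, param2=40,
        minRadius=5, maxRadius=60)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in circles[0]]


def verify_with_faster_rcnn(local_image_bgr, model, score_thresh=0.7):
    """Stage 2: run a Faster R-CNN detector on the close-up image of one
    suspected hole (second camera) to confirm and localize the threaded hole."""
    rgb = cv2.cvtColor(local_image_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    keep = pred["scores"] >= score_thresh
    return pred["boxes"][keep].tolist(), pred["scores"][keep].tolist()


if __name__ == "__main__":
    # Stand-in model: in the paper the detector is trained on a self-built
    # threaded-hole data set; here only generic pretrained weights are loaded.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    whole_bgr = cv2.imread("workpiece_whole.png")          # hypothetical file name
    whole_gray = cv2.cvtColor(whole_bgr, cv2.COLOR_BGR2GRAY)

    for (x, y, r) in find_candidate_holes(whole_gray):
        # In the real system the second camera is driven to each candidate and
        # takes a close-up image; cropping here merely stands in for that step.
        local = whole_bgr[max(y - 2 * r, 0):y + 2 * r, max(x - 2 * r, 0):x + 2 * r]
        if local.size == 0:
            continue
        boxes, scores = verify_with_faster_rcnn(local, model)
        print(f"candidate at ({x}, {y}): {len(boxes)} confirmed hole(s)")
```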
Citation format: ZHANG Zhengkai, QI Lang. Object detection of artifact threaded hole based on Faster R-CNN. Journal of Measurement Science and Instrumentation, 2021, 12(1): 107-114. DOI: 10.3969/j.issn.1674-8042.2021.01.014