Camera pose measurement method based on feature matching
FU Luhua1,2, WANG Chunyun1, HE Jingjing1, CUI Jianguo1, ZHANG Baoshang2, WANG Peng1,2
(1. State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; 2. Science and Technology on Electro-Optic Control Laboratory, Luoyang Institute of Electro-Optic Equipment, Aviation Industry Corporation of China, Luoyang 471009, China)
Abstract: To address the problem that traditional visual pose measurement depends on known structural information or artificial markers on the target in the scene, a relative pose measurement method based on feature matching is proposed and a corresponding system is designed. The method requires no prior information about the target in the scene and achieves high measurement accuracy. First, a binocular camera collects sequence images of the targets in the scene, and the accelerated KAZE (AKAZE) algorithm is used to extract feature points from the images; an improved k-nearest neighbors (KNN) algorithm and random sample consensus (RANSAC) are then used to match the feature points of adjacent images and to eliminate mismatched points. Next, the three-dimensional coordinates of the feature points are obtained by triangulation and refined by bundle adjustment, and a three-dimensional feature point library is built from the three-dimensional coordinates and the two-dimensional image feature vectors. During pose measurement, a monocular camera captures an image of the scene target, AKAZE feature points are extracted from the image to be measured and matched against the three-dimensional feature point library, and the relative pose is solved with the EPnP algorithm followed by Gauss-Newton refinement. In the experiment, the camera is rotated by a high-precision turntable and captures multiple images for measurement. The results show that the maximum measurement error of the designed pose measurement system is less than 0.2° over the range of -20° to 20°, which meets application requirements.
Key words: pose measurement; non-cooperative targets; AKAZE; image matching; three-dimensional reconstruction
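The sketch below outlines the pipeline described in the abstract with OpenCV in Python, purely as an illustration. It is a minimal sketch under assumed parameters: the intrinsic matrix K, the stereo baseline, and the file names left.png, right.png, and query.png are placeholders; Lowe's ratio test stands in for the paper's improved KNN matching, OpenCV's Levenberg-Marquardt refiner stands in for the Gauss-Newton step, and the bundle adjustment optimization is omitted. It is not the authors' implementation.

```python
# Minimal sketch of the feature-matching pose pipeline (assumed parameters throughout).
import cv2
import numpy as np

# Assumed intrinsics, zero distortion, and a 0.1 m stereo baseline (placeholders).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])    # right camera

akaze = cv2.AKAZE_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # AKAZE descriptors are binary

def match(des1, des2, ratio=0.7):
    """KNN matching with Lowe's ratio test (stand-in for the improved KNN)."""
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# ---- Offline: build the 3-D feature point library from a stereo pair ----
imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
kpL, desL = akaze.detectAndCompute(imgL, None)
kpR, desR = akaze.detectAndCompute(imgR, None)
stereo_matches = match(desL, desR)

ptsL = np.float32([kpL[m.queryIdx].pt for m in stereo_matches])
ptsR = np.float32([kpR[m.trainIdx].pt for m in stereo_matches])

# RANSAC on the fundamental matrix removes remaining mismatches.
_, mask = cv2.findFundamentalMat(ptsL, ptsR, cv2.FM_RANSAC, 3.0, 0.99)
keep = mask.ravel().astype(bool)
ptsL, ptsR = ptsL[keep], ptsR[keep]
lib_des = np.array([desL[m.queryIdx] for m, k in zip(stereo_matches, keep) if k])

# Triangulate the inliers; 3-D points plus their 2-D descriptors form the library.
pts4d = cv2.triangulatePoints(P1, P2, ptsL.T, ptsR.T)
lib_pts3d = (pts4d[:3] / pts4d[3]).T

# ---- Online: monocular image -> match against the library -> EPnP pose ----
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
kpQ, desQ = akaze.detectAndCompute(query, None)
query_matches = match(lib_des, desQ)
obj_pts = np.float32([lib_pts3d[m.queryIdx] for m in query_matches])
img_pts = np.float32([kpQ[m.trainIdx].pt for m in query_matches])

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_EPNP)
rvec, tvec = cv2.solvePnPRefineLM(obj_pts, img_pts, K, dist, rvec, tvec)
print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```

The offline stage corresponds to the library construction with the binocular camera, and the online stage to the monocular measurement; the sketch assumes enough correct matches survive the ratio test and RANSAC for triangulation and EPnP (at least four 3-D/2-D correspondences).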
Citation format: FU Luhua, WANG Chunyun, HE Jingjing, et al. Camera pose measurement method based on feature matching. Journal of Measurement Science and Instrumentation, 2023, 14(1): 1-8. DOI: 10.3969/j.issn.1674-8042.2023.01.001