Depth recovery for unstructured farmland road image using an improved SIFT algorithm
DOI: https://doi.org/10.25165/ijabe.v12i4.4821

Keywords: scale-invariant feature transform (SIFT), feature matching, Canny operator, energy method of feature point, farmland road, depth recovery, visual navigation

Abstract
Road visual navigation relies on accurate road models. This study proposed an improved scale-invariant feature transform (SIFT) algorithm for recovering depth information from farmland road images, providing a reliable path for visual navigation. The mean image of the pixel values in five channels (R, G, B, S and V) was treated as the inspected image, and the feature points of the inspected image were extracted with the Canny algorithm, which gave precise feature-point locations and ensured the uniformity and density of the feature points. The mean value of the pixels in the 5×5 neighborhood around each feature point, sampled at 45° intervals in eight directions, was then treated as the feature vector, and the differences between feature vectors were used for preliminary matching of the left and right image feature points. To obtain the depth information of farmland road images, the energy method of feature points was used to eliminate mismatched points. Experiments with a binocular stereo vision system showed that the matching accuracy and time consumption for depth recovery with the improved SIFT algorithm were 96.48% and 5.6 s, respectively, with a depth-recovery error of −7.17% to 2.97% within a certain sight distance. The mean uniformity, time consumption, and matching accuracy over all 60 images captured under various weather and road conditions were 50%-70%, 5.0-6.5 s, and higher than 88%, respectively, indicating that the feature-point performance (e.g., uniformity, matching accuracy, and real-time capability) of the improved SIFT algorithm was superior to that of the conventional SIFT algorithm. This study provides an important reference for machine vision-based navigation of agricultural equipment.

Citation: Yao L J, Hu D, Yang Z D, Li H B, Qian M B. Depth recovery for unstructured farmland road image using an improved SIFT algorithm. Int J Agric & Biol Eng, 2019; 12(4): 141–147.
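To make the pipeline described in the abstract concrete, the sketch below illustrates the main steps: forming the five-channel mean image, taking Canny edge pixels as feature points, building an eight-direction feature vector from the 5×5 neighborhood, preliminary matching by feature-vector difference, and recovering depth from disparity. This is a minimal illustration assuming OpenCV and NumPy, not the authors' released code; the Canny thresholds, the particular reading of the eight-direction descriptor, the matching threshold, and the calibration values are assumptions not given in the abstract, and the energy method of feature points used to eliminate mismatches is not reproduced here because its details are not stated in the abstract.

```python
# Minimal sketch of the improved-SIFT pipeline described in the abstract.
# Assumptions are marked in the comments; parameter values are illustrative.
import cv2
import numpy as np

def inspected_image(bgr):
    """Mean of the R, G, B, S and V channel images (the 'inspected image')."""
    bgr_f = bgr.astype(np.float32)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    b, g, r = bgr_f[:, :, 0], bgr_f[:, :, 1], bgr_f[:, :, 2]
    s, v = hsv[:, :, 1], hsv[:, :, 2]
    return ((r + g + b + s + v) / 5.0).astype(np.uint8)

def canny_feature_points(img, low=50, high=150):
    """Edge pixels from the Canny operator serve as candidate feature points.
    Thresholds are assumptions for this sketch."""
    edges = cv2.Canny(img, low, high)
    ys, xs = np.nonzero(edges)
    return list(zip(xs.tolist(), ys.tolist()))

def eight_direction_descriptor(img, x, y):
    """One plausible reading of the descriptor: within the 5x5 neighborhood of
    a feature point, average the pixels lying along each of the eight 45-degree
    directions, giving an 8-element feature vector."""
    h, w = img.shape
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    vec = []
    for dx, dy in dirs:
        samples = []
        for step in (1, 2):  # a 5x5 window allows up to 2 steps per direction
            px, py = x + step * dx, y + step * dy
            if 0 <= px < w and 0 <= py < h:
                samples.append(float(img[py, px]))
        vec.append(float(np.mean(samples)) if samples else 0.0)
    return np.array(vec)

def preliminary_match(desc_left, desc_right, thresh=10.0):
    """Preliminary matching by the difference (here, L1 distance) of feature
    vectors; the acceptance threshold is an illustrative assumption."""
    matches = []
    for i, dl in enumerate(desc_left):
        dists = [np.abs(dl - dr).sum() for dr in desc_right]
        j = int(np.argmin(dists))
        if dists[j] < thresh:
            matches.append((i, j))
    return matches

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Standard parallel-rig relation Z = f*B/d for a rectified binocular pair;
    focal length and baseline come from calibration (assumed values here)."""
    d = float(x_left - x_right)
    return focal_px * baseline_m / d if d > 0 else None
```

One consequence of this design worth noting: the 8-element neighborhood descriptor is far smaller than the 128-dimensional descriptor of conventional SIFT, which is consistent with the reduced processing time reported in the abstract.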