Apple flower phenotype detection method based on YOLO-FL and application of intelligent flower thinning robot
DOI: https://doi.org/10.25165/ijabe.v18i3.9110

Keywords: apple flower, YOLO-FL, deep learning, intelligent flower thinning robot

Abstract
In intelligent flower thinning robot applications, accurate and efficient apple flower detection is the key to realizing automated fruit tree thinning operations. However, complex orchard environments and diverse flower characteristics, such as occlusion, illumination variation, and varying flower density, pose many challenges to apple blossom detection. To address these challenges, this study proposes an improved model based on the YOLO object detection framework, named the YOLO-FL apple flower detection model. The model enhances feature extraction by optimizing the Backbone with the EC3DFM structure and improves feature fusion by introducing the MFEM structure into the Neck. In addition, the ABRLoss loss function is used to refine the predicted bounding boxes, and the SimAM attention mechanism is added to the two middle detection heads in the Neck, further improving detection performance. Experimental results show that YOLO-FL achieves 74.63% precision, 73.82% recall, and 79.97% mean average precision on the test set, a significant improvement over the baseline model. Meanwhile, the model size is only 4693 KB, demonstrating high efficiency and storage advantages. After the YOLO-FL model was deployed on the intelligent flower thinning robot, it processed test images at 40.7 FPS with an average missed detection rate of 7.26% and a false detection rate of 6.89%, efficiently completing apple flower detection in the complex orchard environment. This study provides an effective solution and technical support for the application of image recognition technology in intelligent flower thinning robots.

Citation: Gao A, Du Y H, Li Y Q, Song Y P, Ren L L. Apple flower phenotype detection method based on YOLO-FL and application of intelligent flower thinning robot. Int J Agric & Biol Eng, 2025; 18(3): 236–246. DOI: 10.25165/j.ijabe.20251803.9110.

References
Bi S H, Li X, Shen T, Xu Y, Ma L Y. Apple classification based on evidence theory and multiple models. Transactions of the CSAE, 2022; 38(13): 141-149. (in Chinese)
Chen Z, Yu L, Liu W, Zhang J, Wang N, Chen X S. Research progress of fruit color development in apple (Malus domestica Borkh). Plant Physiology and Biochemistry, 2021; 162: 267-279.
Hussain M, He L, Schupp J, Lyons D, Heinemann P. Green fruit segmentation and orientation estimation for robotic green fruit thinning of apples. Computers and Electronics in Agriculture, 2023; 207: 107734.
Wang D, He D. Channel pruned YOLO V5s-based deep learning approach for rapid and accurate apple fruitlet detection before fruit thinning. Biosystems Engineering, 2021; 210(6): 271-281.
Shang Y Y, Xu X S, Jiao Y T, Wang Z, Hua Z X, Song H B. Using lightweight deep learning algorithm for real-time detection of apple flowers in natural environments. Computers and Electronics in Agriculture, 2023; 207: 107765.
Chen G F, Chen Z Y, Wang Y L, Fan G Q, Li H Q. Research on detection method of apple flower based on data-enhanced deep learning. Journal of Chinese Agricultural Mechanization, 2022; 43(5): 148-155. (in Chinese)
Datt R M, Kukreja V. Phenological stage recognition model for apple crops using transfer learning. 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), 2022; pp.1537-1542. DOI: 10.1109/ICACITE53722.2022.9823711.
Zhou X Z, Sun G X, Xu N M, Zhang X L, Cai J Q, Yuan Y P, et al. A method of modern standardized apple orchard flowering monitoring based on S-YOLO. Agriculture, 2023; 13(2): 380.
Mu X Y, He L, Heinemann P, Schupp J, Karkee M. Mask R-CNN based apple flower detection and king flower identification for precision pollination. Smart Agricultural Technology, 2023; 4: 100151.
Yang R J, Li W F, Shang X N, Zhu D P, Man X Y. KPE-YOLOv5: An improved small target detection algorithm based on YOLOv5. Electronics, 2023; 12(4): 817.
Li Y D, Xue J X, Zhang M Y, Yin J Y, Liu Y, Qiao X D, et al. YOLOv5-ASFF: A multistage strawberry detection algorithm based on improved YOLOv5. Agronomy, 2023; 13(7): 1901.
Shi H K, Xiao W F, Zhu S P, Li L B, Zhang J F. CA-YOLOv5: Detection model for healthy and diseased silkworms in mixed conditions based on improved YOLOv5. Int J Agric & Biol Eng, 2023; 16(6): 236-245.
Ding X H, Zhang Y Y, Ge Y X, Zhao S J, Song L, Yue X Y, et al. UniRepLKNet: A universal perception large-kernel ConvNet for audio, video, point cloud, time-series and image recognition. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2024. DOI: 10.1109/CVPR52733.2024.00527.
Lin X, Yue J T, Chan K C K, Qi L, Ren C, Pan J S, et al. Multi-task image restoration guided by robust DINO features. arXiv: 2312.01677, 2023; DOI: 10.48550/arXiv.2312.01677.
Liu X, Wang Y, Yu D, Yuan Z. YOLOv8-FDD: A real-time vehicle detection method based on improved YOLOv8. IEEE Access, 2024; 12: 136280-136296. DOI: 10.1109/ACCESS.2024.3453298.
Zhang H, Xu C, Zhang S J. Inner-IoU: More effective intersection over union loss with auxiliary bounding box. arXiv: 2311.02877, 2023. DOI: 10.48550/arXiv.2311.02877.
Zhang P R, Zhang G X, Yang K H. APNet: Accurate positioning deformable convolution for UAV image object detection. IEEE Latin America Transactions, 2024; 22: 304-311.
Hu Z C, Wang Y, Wu J P, Xiong W L, Li B L. Improved lightweight rebar detection network based on YOLOv8s algorithm. Advances in Computer, Signals and Systems, 2023; 7: 107-117.
Ma S L, Xu Y. MPDIoU: A loss for efficient and accurate bounding box regression. arXiv: 2307.07662, 2023. DOI: 10.48550/arXiv.2307.07662.
He J, Zhang S, Yang C, Wang H, Gao J, Huang W, et al. Pest recognition in microstates state: an improvement of YOLOv7 based on Spatial and Channel Reconstruction Convolution for feature redundancy and vision transformer with Bi-Level Routing Attention. Front. Plant Sci, 2024; 15: 1327237.
Chen B J, Zhang W H, Wu W B, Li Y R, Chen Z L, Li C L. ID-YOLOv7: An efficient method for insulator defect detection in power distribution network. Front. Neurorobot, 2024; 17: 1331427. DOI: 10.3389/fnbot.2023.1331427.
Woo S, Park J, Lee J, Kweon I. CBAM: Convolutional block attention module. arXiv: 1807.06521, 2018. DOI: 10.48550/arXiv.1807.06521.
Hu J, Shen L, Sun G. Squeeze-and-excitation networks. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2018. DOI: 10.1109/CVPR.2018.00745.
Hou Q B, Zhou D Q, Feng J S. Coordinate attention for efficient mobile network design. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021. DOI: 10.1109/CVPR46437.2021.01350.
Wang Q L, Wu B G, Zhu P F, Li P H, Zuo W M, Hu Q H. ECA-Net: Efficient channel attention for deep convolutional neural networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020; pp.11531-11539. DOI: 10.1109/CVPR42600.2020.01155.
Yang L X, Zhang R Y, Li L D, Xie X H. SimAM: A simple, parameter-free attention module for convolutional neural networks. International Conference on Machine Learning, 2021. Available: https://api.semanticscholar.org/CorpusID:235825945. Accessed on [2024-12-20].
Gao A, Ren H, Song Y P, Ren L L, Zhang Y, Han X. Construction and verification of machine vision algorithm model based on apple leaf disease images. Front Plant Sci, 2023; 14: 1246065.
Jiang Y Y, Chen X Y, Li G M, Wang F, Ge H Y. Graph neural network and its research progress in field of image processing. Computer Engineering and Applications, 2023; 59(7): 15-30.
Ouyang D L, He S, Zhang G Z, Luo M Z, Guo H Y, Zhan J, et al. Efficient multi-scale attention module with cross-spatial learning. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023; DOI: 10.1109/ICASSP49357.2023.10096516.
Roy A M, Bhaduri J. DenseSPH-YOLOv5: An automated damage detection model based on DenseNet and Swin-Transformer prediction head-enabled YOLOv5 with attention mechanism. Adv. Eng. Informatics, 2023; 56: 102007.
Li C Y, Li L L, Jiang H L, Weng K H, Geng Y F, Li L, et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv: 2209.02976, 2022. DOI: 10.48550/arXiv.2209.02976.
Wang C Y, Bochkovskiy A, Liao H Y. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023; pp.7464-7475. DOI: 10.1109/CVPR52729.2023.00721.
Hussain M. YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection. Machines, 2023; 11(7): 677.
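Note for readers implementing the method: of the components named in the abstract, EC3DFM, MFEM, and ABRLoss are the authors' own designs and are not reproduced here, but the SimAM attention added to the two middle detection heads is the parameter-free module of Yang et al. (cited above). The following PyTorch sketch is an illustrative re-implementation of the generic SimAM block under the formulation in that paper, not the authors' exact YOLO-FL integration; the `e_lambda` regularizer default follows the SimAM paper.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free SimAM attention (Yang et al., 2021).

    Scores each activation by an energy-based saliency derived from its
    deviation from the channel mean, adding no learnable parameters.
    """

    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularization term from the paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        # Squared deviation of each activation from its channel mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # Per-channel variance estimate over the spatial dimensions
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # Inverse energy: more distinctive neurons receive larger scores
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        # Gate the feature map with a sigmoid of the saliency scores
        return x * torch.sigmoid(e_inv)
```

Because the module has no parameters, it can be dropped into a detection head (e.g., between the neck output and the head convolution) without changing the model's weight count, which is consistent with the small model size reported in the abstract.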
License
IJABE is an international peer-reviewed open-access journal and adopts the following Creative Commons copyright notice.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).