Novel multiple object tracking method for yellow feather broilers in a flat breeding chamber based on improved YOLOv3 and deep SORT

Xiuguo Zou, Zhengling Yin, Yuhua Li, Fei Gong, Yungang Bai, Zhonghao Zhao, Wentian Zhang, Yan Qian, Maohua Xiao

Abstract


To address the difficulty of recognizing the health status of yellow feather broilers in large-scale broiler farms and the low recognition rates of existing models, this paper proposes a machine-vision method for precisely tracking multiple broilers. Tracking broilers in the breeding environment allows their behaviors, and hence their health status, to be analyzed further. An improved YOLOv3 (You Only Look Once v3) algorithm serves as the detector of the Deep SORT (Simple Online and Realtime Tracking) algorithm to realize multiple object tracking of yellow feather broilers in the flat breeding chamber. The backbone of YOLOv3 was replaced with MobileNetV2 to improve the inference speed of the detection module, and a DRSN (Deep Residual Shrinkage Network) was integrated with MobileNetV2 to enhance the feature extraction capability of the network. Moreover, given that individual yellow feather broilers vary only slightly in size, the feature fusion network was redesigned with an attention mechanism so that the multi-scale features of objects can be learned adaptively. Compared with the traditional YOLOv3, the improved YOLOv3 achieves 93.2% mAP (mean Average Precision) at 29 fps (frames per second), demonstrating high-precision real-time detection. Furthermore, relative to a tracker built on the traditional YOLOv3 detector, the MOTA (Multiple Object Tracking Accuracy) increases from 51% to 54%, and the number of IDSW (Identity Switch) errors decreases by 62.2%. The proposed algorithm can provide a technical reference for perceiving the behavior and analyzing the health status of broilers in the flat breeding environment.
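To illustrate the tracking-by-detection idea at the core of the pipeline (the detector proposes bounding boxes every frame; the tracker links them to persistent identities), the minimal, stdlib-only Python sketch below performs association by greedy IoU matching alone. The class name and threshold are hypothetical simplifications for illustration: the paper's actual Deep SORT tracker additionally uses Kalman-filter motion prediction and a deep appearance-embedding metric in its association cost, which is what drives down identity switches.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


class GreedyIoUTracker:
    """Toy tracker: carries integer IDs across frames by greedy IoU matching.

    Deep SORT additionally predicts each track's motion with a Kalman filter
    and matches appearance embeddings; both are omitted here for brevity.
    """

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track id -> last matched box
        self.next_id = 0

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes from the per-frame
        detector. Returns {track_id: box} for the current frame."""
        unmatched = list(detections)
        # Greedily match each existing track to its best-overlapping detection.
        for tid, last_box in list(self.tracks.items()):
            best_box, best_iou = None, self.iou_threshold
            for det in unmatched:
                score = iou(last_box, det)
                if score > best_iou:
                    best_box, best_iou = det, score
            if best_box is None:
                del self.tracks[tid]          # track lost this frame
            else:
                self.tracks[tid] = best_box
                unmatched.remove(best_box)
        # Leftover detections start new tracks with fresh IDs.
        for det in unmatched:
            self.tracks[self.next_id] = det
            self.next_id += 1
        return dict(self.tracks)
```

A broiler that moves slightly between frames keeps its ID because its new box still overlaps its previous one; a bird entering the field of view receives a fresh ID. Greedy IoU matching alone fails under occlusion, which is exactly where Deep SORT's motion and appearance cues matter.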
Keywords: yellow feather broiler, flat breeding chamber, multiple object tracking, improved YOLOv3, Deep SORT
DOI: 10.25165/j.ijabe.20231605.7836

Citation: Zou X G, Yin Z L, Li Y H, Gong F, Bai Y G, Zhao Z H, et al. Novel multiple object tracking method for yellow feather broilers in a flat breeding chamber based on improved YOLOv3 and deep SORT. Int J Agric & Biol Eng, 2023; 16(5): 44–55.


References


Mottet A, Tempio G. Global poultry production: current state and future outlook and challenges. World's Poultry Science Journal, 2017; 73(2): 245-256.

Xiao L, Ding K, Gao Y, Rao X. Behavior-induced health condition monitoring of caged chickens using binocular vision. Computers and Electronics in Agriculture, 2019; 156: 254-262.

Fujii T, Yokoi H, Tada T, Suzuki K, Tsukamoto K. Poultry tracking system with camera using particle filters. 2008 IEEE International Conference on Robotics and Biomimetics, Bangkok, Thailand, Feb. 22-25, 2009; pp.1888-1893.

Ahrendt P, Gregersen T, Karstoft H. Development of a real-time computer vision system for tracking loose-housed pigs. Computers and Electronics in Agriculture, 2011; 76(2): 169-174.

Nakarmi A D, Tang L, Xin H. Automated tracking and behavior quantification of laying hens using 3D computer vision and radio frequency identification technologies. Transactions of the ASABE, 2014; 57(5): 1455-1472.

Mittek M, Psota E, Carlson J D, Pérez L C, Schmidt T, Mote B. Tracking of group-housed pigs using multi-ellipsoid expectation maximisation. IET Computer Vision, 2018; 12(2): 121-128.

Redmon J, Divvala S, Girshick R B, Farhadi A. You Only Look Once: unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, Jun. 27-30, 2016; pp.779-788.

Redmon J, Farhadi A. YOLO9000: better, faster, stronger. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, Jul. 21-26, 2017; pp.6517-6525.

Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv, 2018; doi: 10.48550/arXiv.1804.02767.

Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, et al. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision (ECCV), Amsterdam, Netherlands, Oct. 11-14, 2016; pp.21-37.

Fu C-Y, Liu W, Ranga A, Tyagi A, Berg A C. DSSD: Deconvolutional Single Shot Detector. arXiv, 2017; doi: 10.48550/arXiv.1701.06659.

Li Z, Zhou F. FSSD: Feature fusion single shot multibox detector. arXiv, 2017; doi: 10.48550/arXiv.1712.00960.

Lin T-Y, Goyal P, Girshick R B, He K, Dollár P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020; 42(2): 318-327.

Qin Z, Li Z, Zhang Z, Bao Y, Yu G, Peng Y, et al. ThunderNet: towards real-time generic object detection on mobile devices. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), Oct. 27 - Nov. 2, 2019; pp.6717-6726.

Sun Q, Wu T, Zou X, Qiu X, Yao H, Zhang S, et al. Multiple object tracking for yellow feather broilers based on foreground detection and deep learning. INMATEH-Agricultural Engineering, 2019; 58(2): 155-166.

Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, Jun. 23-28, 2014; pp.580-587.

Girshick R. Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, Dec. 7-13, 2015; pp.1440-1448.

Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017; 39(6): 1137-1149.

Sun L, Zou Y, Li Y, Cai Z, Li Y, Luo B, et al. Multi target pigs tracking loss correction algorithm based on Faster R-CNN. Int J Agric & Biol Eng, 2018; 11: 192-197.

Lin C-Y, Hsieh K-W, Tsai Y-C, Kuo Y-F. Monitoring chicken heat stress using deep convolutional neural networks. 2018 ASABE Annual International Meeting, Detroit, Michigan, USA, Jul. 29 - Aug. 1, 2018; doi: 10.13031/aim.201800314.

Bewley A, Ge Z, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, Arizona, USA, Sep. 25-28, 2016; pp.3464-3468.

Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric. 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, Sep. 17-20, 2017; pp.3645-3649.

Yao H, Sun Q, Zou X, Wang S, Zhang S, Zhang S, et al. Research of yellow-feather chicken breeding model based on small chicken chamber. INMATEH-Agricultural Engineering, 2018; 56(3): 91-100.

Bochkovskiy A, Wang C-Y, Liao H. YOLOv4: optimal speed and accuracy of object detection. arXiv, 2020; doi: 10.48550/arXiv.2004.10934.

Sandler M, Howard A G, Zhu M, Zhmoginov A, Chen L-C. MobileNetV2: inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, Jun. 18-23, 2018; pp.4510-4520.

Zhao M, Zhong S, Fu X-Y, Tang B, Pecht M. Deep residual shrinkage networks for fault diagnosis. IEEE Transactions on Industrial Informatics, 2020; 16(7): 4681-4690.

Woo S, Park J, Lee J-Y, Kweon I S. CBAM: Convolutional Block Attention Module. European Conference on Computer Vision (ECCV), Munich, Germany, Sep. 10-13, 2018; pp.3-19.

Hu J, Shen L, Albanie S, Sun G, Wu E. Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020; 42(8): 2011-2023.

Loshchilov I, Hutter F. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv, 2017; doi: 10.48550/arXiv.1608.03983.

Powers D M W. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv, 2020; doi: 10.48550/arXiv.2010.16061.

Everingham M, Winn J. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Development Kit. Pattern Analysis, Statistical Modelling and Computational Learning, Tech. Rep., 2011.

van der Maaten L, Hinton G E. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008; 9(86): 2579-2605.

Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, Jun. 27-30, 2016; pp.2921-2929.

Milan A, Leal-Taixé L, Reid I, Roth S, Schindler K. MOT16: a benchmark for multi-object tracking. arXiv, 2016; doi: 10.48550/arXiv.1603.00831.




Copyright (c) 2023 International Journal of Agricultural and Biological Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.
