Multi target pigs tracking loss correction algorithm based on Faster R-CNN

Longqing Sun, Yuanbing Zou, Yan Li, Zhengda Cai, Yue Li, Bing Luo, Yan Liu, Yiyang Li

Abstract


To address the problem of tracking boxes being lost during the visual tracking of pigs, this research proposed a multi-target pig tracking loss correction algorithm based on Faster R-CNN. Video of live pigs was first processed by Faster R-CNN to obtain the object bounding boxes. SURF feature matching and the background difference method were then combined to predict whether a target pig would be occluded in the next frame. According to the occlusion condition, the maximum horizontal and vertical offsets of each bounding box between adjacent frames were calculated over N consecutive frames (where N equals the video frame rate). When the bounding boxes in a video frame merged into a single box, these maximum offsets were used to correct the offset of the current tracking box, thereby resolving the loss of the tracked target. Experimental results showed that the success rate of RP Faster R-CNN on the data set ranged from 80% to 97%, compared with 40% to 85% for Faster R-CNN, and the average center point error of RP Faster R-CNN was 1.46, lower than that of Faster R-CNN, which was about 2.60. The proposed algorithm showed good robustness and adaptability: it resolved the loss of tracked targets and accurately tracked multiple targets when they occluded each other.
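The following Python code is a minimal illustrative sketch (not the authors' implementation) of the correction step described above. It assumes per-target boxes are supplied by a detector such as Faster R-CNN, keeps the last N boxes per target (N equal to the video frame rate), computes the maximum per-axis displacement between adjacent frames, and shifts the last known box by that displacement when two boxes merge during occlusion. All class, function, and variable names here are hypothetical.

from collections import deque

class TrackCorrector:
    """Illustrative helper: stores recent boxes per target and extrapolates a
    box when the detector merges two occluding targets into one box."""

    def __init__(self, frame_rate):
        self.N = frame_rate       # window length in frames (N = video frame rate)
        self.history = {}         # track_id -> deque of recent (x, y, w, h) boxes

    def update(self, track_id, box):
        # Record the latest detector box for this target.
        self.history.setdefault(track_id, deque(maxlen=self.N)).append(box)

    def max_offset(self, track_id):
        # Maximum horizontal and vertical displacement between adjacent frames
        # over the last N frames for this target.
        boxes = list(self.history.get(track_id, ()))
        dx = max((abs(b[0] - a[0]) for a, b in zip(boxes, boxes[1:])), default=0.0)
        dy = max((abs(b[1] - a[1]) for a, b in zip(boxes, boxes[1:])), default=0.0)
        return dx, dy

    def correct(self, track_id, direction=(1, 1)):
        # When this target's box has merged with another one during occlusion,
        # estimate its current position by shifting the last known box by the
        # maximum offsets along the recent motion direction (+1 or -1 per axis).
        boxes = self.history.get(track_id)
        if not boxes:
            return None
        x, y, w, h = boxes[-1]
        dx, dy = self.max_offset(track_id)
        return (x + direction[0] * dx, y + direction[1] * dy, w, h)

# Hypothetical usage: call update() every frame with detector output; when two
# pigs' boxes merge into one, correct() keeps a separate estimate for the
# occluded pig so its track is not lost.
corrector = TrackCorrector(frame_rate=25)
corrector.update("pig_1", (100.0, 50.0, 80.0, 60.0))
corrector.update("pig_1", (104.0, 52.0, 80.0, 60.0))
print(corrector.correct("pig_1"))   # -> (108.0, 54.0, 80.0, 60.0)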
Keywords: object tracking, Faster-RCNN, individual pig, target occlusion
DOI: 10.25165/j.ijabe.20181105.4232

Citation: Sun L Q, Zou Y B, Li Y, Cai Z D, Li Y, Luo B, et al. Multi target pigs tracking loss correction algorithm based on Faster R-CNN. Int J Agric & Biol Eng, 2018; 11(5): 192–197.





