Model for the online frontal image selection of silkworm pupae using machine vision
Keywords: agricultural engineering, computer vision

Abstract
The sorting of male and female silkworm pupae is an essential process in silkworm breeding, and its accuracy directly affects the quality of hybrid silkworm eggs and silk. Gonadal characteristics serve as a reliable basis for sex identification of silkworm pupae; however, the gonads are visible only on the frontal side of the tail. Because of the unique geometry of silkworm pupae, online sex recognition based on machine vision requires flipping each pupa and capturing multiple photographs. Accurately selecting the frontal image from multiple images of the same pupa in different poses is therefore a prerequisite for subsequent sex identification. To address this challenge, we proposed SPNet-GS (Silkworm Pupae Network for Gonad Selection), a lightweight model for the online selection of frontal silkworm pupae images. The model first employed a large-kernel convolution to enlarge the receptive field and capture correlations between adjacent pixels. Dilated convolutions were then used to capture correlations between distant pixels at multiple scales. Finally, the correlation information from near and far pixels was fused to enhance feature extraction. Experimental results demonstrated that the proposed method outperformed other models, with an average accuracy of 98.41% and an average F1 score of 99.02%. The average inference time per image was 0.03 s, which fully meets the requirements of online sorting of male and female silkworm pupae. Moreover, the sex identification accuracy using the selected frontal images and gonad-region images reached 84.68% and 94.58%, respectively, which is 10% and 19.90% higher than using multi-pose images, demonstrating the effectiveness of the frontal image selection strategy. These findings may provide a valuable reference for machine vision-based intelligent online sorting of silkworm pupae by sex.
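The design rationale above rests on receptive-field arithmetic: a large kernel widens the region of adjacent pixels a single layer can see, while dilation stretches a small kernel to relate distant pixels. As a minimal sketch (not the authors' SPNet-GS implementation; the function names and layer configurations are illustrative assumptions), the effective extent of a dilated kernel and of a stack of such layers can be computed as:

```python
def dilated_kernel_extent(kernel_size: int, dilation: int) -> int:
    """Spatial extent covered by one dilated convolution kernel:
    k_eff = k + (k - 1) * (d - 1)."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)


def stacked_receptive_field(layers) -> int:
    """Receptive field of stacked stride-1 conv layers,
    given as (kernel_size, dilation) pairs."""
    rf = 1
    for k, d in layers:
        rf += dilated_kernel_extent(k, d) - 1
    return rf


# A 3x3 kernel with dilation 2 spans 5 pixels per axis.
print(dilated_kernel_extent(3, 2))          # 5
# Three 3x3 layers with dilations 1, 2, 4 see a 15-pixel span,
# which a single layer would need a 15x15 kernel to match.
print(stacked_receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

This illustrates why combining a large-kernel branch (dense local correlations) with dilated branches (sparse long-range correlations at multiple scales) covers both near- and far-pixel relationships before fusion.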
Keywords: frontal image selection, gonad image, large kernel convolution, sex identification of silkworm pupae
DOI: 10.25165/j.ijabe.20261901.8409
Citation: Guo F, Li J, Qin W, Zhao C J, Li G L. Model for the online frontal image selection of silkworm pupae using machine vision. Int J Agric & Biol Eng, 2026; 19(1): 163–169.
License
Copyright (c) 2026 International Journal of Agricultural and Biological Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.