A Convolutional Approach to Early Detection and Classification of Tomato Foliar Pathogens

George Princess Thomas

Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India

Poovammal Easwaran

Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India

Heartlin Maria Hermas

Department of ECE, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India

Kothai Ganesan

Department of CSE (Artificial Intelligence and Machine Learning), KPR Institute of Engineering and Technology, Coimbatore 641407, India

DOI: https://doi.org/10.36956/ia.v1i1.1486

Received: 19 November 2024 | Revised: 15 March 2025 | Accepted: 23 March 2025 | Published Online: 29 March 2025

Copyright © 2025 George Princess Thomas, Poovammal Easwaran, Heartlin Maria Hermas, Kothai Ganesan. Published by Nan Yang Academy of Sciences Pte. Ltd.

This is an open access article under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License.


Abstract

Food security remains a critical global concern, as the rising world population has led to a continuous increase in food demand. Global food security relies heavily on the agricultural sector, with tomatoes being a vital dietary component worldwide. However, various diseases pose an ongoing threat to tomato crop yield and quality. Prompt and accurate identification of these diseases is crucial for sustainable agriculture and effective management practices. This study introduces an approach using Convolutional Neural Networks (CNNs) for rapid detection and classification of tomato leaf diseases through image analysis. The system uses a high-resolution dataset of tomato leaf images showing symptoms of common diseases such as bacterial wilt, early blight, and late blight. Before training, the dataset undergoes preprocessing to enhance image clarity and remove noise, and is then divided into training and testing subsets. A custom CNN architecture is developed and trained to automatically learn and extract hierarchical features from the images. Transfer learning methods are also explored to improve the model's efficiency and generalization. The model's performance is evaluated using accuracy, precision, recall, and F1 score. The results indicate that the CNN model achieves high accuracy and robustness in early disease detection. This approach holds substantial potential for practical deployment, offering farmers and agricultural professionals a powerful tool for timely and precise disease management. By enabling targeted responses and supporting precision agriculture, the proposed method integrates modern technology with sustainable farming, ultimately contributing to agricultural stability and global food security.
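
The pipeline described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' exact architecture: it assumes a TensorFlow/Keras environment, a hypothetical tomato_leaves/ directory with one sub-folder of images per disease class, and illustrative hyperparameters. It covers the steps named above: loading and splitting the dataset, light preprocessing and augmentation, a small custom CNN, and evaluation with accuracy, precision, recall, and F1 score. A pretrained backbone such as MobileNetV2 could be swapped in for the convolutional stack to approximate the transfer-learning variant.

```python
# Minimal sketch of a tomato leaf disease classifier (illustrative only; not
# the authors' exact architecture). Paths, class count, and hyperparameters
# are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import classification_report

IMG_SIZE = (224, 224)
NUM_CLASSES = 3              # e.g., bacterial wilt, early blight, late blight
DATA_DIR = "tomato_leaves/"  # hypothetical dataset path, one folder per class

# Load images and split into training and held-out subsets.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Simple augmentation stage (active only during training).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

# Small custom CNN: rescaling, three conv/pool blocks, dense classifier head.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    augment,
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Evaluate on the held-out subset: per-class precision, recall, F1, accuracy.
y_true, y_pred = [], []
for images, labels in val_ds:
    probs = model.predict(images, verbose=0)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(probs, axis=1))
print(classification_report(y_true, y_pred))
```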

Keywords: Importing Libraries and Datasets; Tomato Disease Classification; Convolutional Neural Networks (CNN); Transfer Learning; Data Augmentation; Pre-Processing and Feature Extraction; Testing and Training; Prevention; Recommendation of Pesticides

