International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines

| Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary and Open Access Journal | High Impact Factor 8.771 (Calculated by Google Scholar and Semantic Scholar) | AI-Powered Research Tool | Indexed in all Major Databases & Metadata, Citation Generator | Digital Object Identifier (DOI) |


TITLE AI-Powered Vision System for Automated Quality Grading using Image Recognition
ABSTRACT The AI-powered vision system for automated quality grading of agricultural produce uses image recognition techniques to analyze fruits and vegetables based on their size, color, shape, and surface defects. The system captures images with a camera and processes them using image processing and machine learning algorithms. Features such as color uniformity, texture patterns, and defect indicators are extracted and passed to trained models that classify the produce into quality grades. This automated grading system reduces manual effort, increases accuracy, and ensures consistent quality assessment in agricultural industries.
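The pipeline described above (capture, feature extraction, classification into grades) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the feature definitions, thresholds, and the rule-based `grade` function standing in for the trained classifier are all assumptions.

```python
# Hedged sketch of the grading pipeline: extract simple quality features
# from a produce image and map them to a grade. All feature definitions
# and thresholds here are illustrative assumptions.
import numpy as np

def extract_features(img: np.ndarray) -> dict:
    """img: H x W x 3 uint8 RGB image of a single fruit/vegetable."""
    pixels = img.reshape(-1, 3).astype(float)
    # Color uniformity: inverse of average per-channel standard deviation
    # (close to 1.0 for a perfectly uniform surface).
    color_uniformity = 1.0 / (1.0 + pixels.std(axis=0).mean())
    # Crude surface-defect proxy: fraction of unusually dark pixels.
    brightness = pixels.mean(axis=1)
    defect_ratio = float((brightness < 60).mean())
    return {"color_uniformity": color_uniformity,
            "defect_ratio": defect_ratio}

def grade(features: dict) -> str:
    """Rule-based stand-in for the trained classification model."""
    if features["defect_ratio"] > 0.10:   # too many dark/defective pixels
        return "C"
    if features["color_uniformity"] > 0.05:
        return "A"
    return "B"

if __name__ == "__main__":
    # Synthetic uniform, bright image: no defects, fully uniform color.
    img = np.full((64, 64, 3), 200, dtype=np.uint8)
    print(grade(extract_features(img)))  # → A
```

In practice the hand-written rules would be replaced by a model trained on labeled images, and the features would include texture and shape descriptors as the abstract notes.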
AUTHOR D. TEJO VAMSI KRISHNA, G. SRI LAKSHMI DURGA, T. HEMA SRI, CH. KIRAN (U.G. Students, Department of ECE, SVIET Engineering College, Nandamuru, Pedana, Andhra Pradesh, India); K.G.V. NAGESWARA RAO (Assistant Professor, Department of ECE, SVIET Engineering College, Nandamuru, Pedana, Andhra Pradesh, India)
VOLUME 182
DOI 10.15680/IJIRCCE.2026.1403130
PDF pdf/130_AI-Powered Vision System for Automated Quality Grading using Image Recognition.pdf
KEYWORDS
Copyright © IJIRCCE 2020. All rights reserved.