International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines

Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary and Open Access Journal | Digital Object Identifier (DOI) assigned


TITLE Implementation of FlowML: A Distributed AutoML Platform with Integrated MLOps for Scalable Machine Learning
ABSTRACT The demand for machine learning solutions is outpacing the availability of expert practitioners. Automated Machine Learning (AutoML) systems address this by automating the model development lifecycle, but are often constrained by the computational limits of a single machine. This paper presents FlowML, a novel, web-based platform designed to democratize and scale AutoML through a distributed computing architecture. FlowML integrates a modern web interface with a robust backend and a Celery-based task queue to parallelize model training across a dynamic cluster of heterogeneous workers. The system demonstrates significant potential for reducing training times and provides a seamless, end-to-end user experience from data upload to model analysis.
AUTHOR PROF. AMRITA SHIRODE, APURV SHARAD BHOSALE, SHUBHAM DNYANOBA ANDHALE, VEDANT NILESH DHUMAL, ARYAN PARVIJKHAN AWATI Department of Artificial Intelligence & Machine Learning, AISSMS Polytechnic, Pune, India
VOLUME 182
DOI: 10.15680/IJIRCCE.2026.1403031
PDF pdf/31_Implementation of FlowML A Distributed AutoML Platform with Integrated MLOps for Scalable Machine Learning.pdf
KEYWORDS
References
[1] Feurer, M., et al. "Efficient and robust automated machine learning." Advances in Neural Information Processing Systems 28 (2015).
[2] Olson, R. S., et al. "TPOT: A tree-based pipeline optimization tool for automating machine learning." Extended Abstracts of the 2016 CHI Conference on Human Factors in Computing Systems (2016).
[3] Akiba, T., et al. "Optuna: A next-generation hyperparameter optimization framework." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2019).
[4] Moritz, P., et al. "Ray: A distributed framework for emerging AI applications." 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018.
[5] Rocklin, M. "Dask: Parallel computation with blocked algorithms and task scheduling." Proceedings of the 14th Python in Science Conference (2015).
[6] "Celery: Distributed Task Queue." Celery Project, celeryproject.org.
Copyright © IJIRCCE 2020. All rights reserved.