International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines

Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary and Open Access Journal | Digital Object Identifier (DOI)


TITLE SoleMate: AI-Powered Smart Assistive Navigation System
ABSTRACT SoleMate: An AI-Powered Smart Assistive Navigation System for the Visually Impaired and Elderly enhances mobility, safety, and independence in daily environments. Navigating complex surroundings poses significant risks, including undetected obstacles and falls. SoleMate addresses these challenges by integrating real-time computer vision, motion sensing, and cloud-based monitoring into a cohesive mobile application built with the Flutter framework. Unlike traditional navigation aids that rely solely on GPS or manual touch, SoleMate uses deep learning models for object detection and spatial awareness, giving users immediate haptic and auditory feedback. By leveraging Internet of Things (IoT) principles and sensor fusion, the platform detects elevation changes and hazards, while a dedicated cloud backend lets caregivers monitor user safety through real-time location tracking and fall alerts. The result is a user-centred, reliable, scalable, and intuitive interface that bridges the gap between digital intelligence and physical navigation.
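As a rough illustration of the tri-axial accelerometer approach the abstract describes (and that fall-detection surveys such as [3] cover), the core check can be sketched as a free-fall dip followed by an impact spike in acceleration magnitude. This is a minimal sketch only: the thresholds, window size, and function names below are illustrative assumptions, not values from the paper.

```python
import math

def accel_magnitude(ax, ay, az):
    """Magnitude of one tri-axial accelerometer sample, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=10):
    """Flag a fall when near-free-fall (magnitude < free_fall_g)
    is followed by an impact spike (magnitude > impact_g) within
    `window` subsequent samples. Thresholds are illustrative."""
    mags = [accel_magnitude(*s) for s in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:
            # Look ahead a short window for the impact spike.
            for j in range(i + 1, min(i + 1 + window, len(mags))):
                if mags[j] > impact_g:
                    return True
    return False
```

In a deployed system this simple thresholding would typically be fused with gyroscope orientation data and a post-impact inactivity check before triggering a caregiver alert, to reduce false positives from everyday motion.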
AUTHOR DR. B. KALPANA (Associate Professor, Department of Information Technology, R.M.D. Engineering College, Chennai, Tamil Nadu, India); R. HEMAVATHY, S. ISAIVASHINI, V. KARTHIGA, P. KAVIPRIYA (Students, Department of Information Technology, R.M.D. Engineering College, Chennai, Tamil Nadu, India)
VOLUME 182
DOI: 10.15680/IJIRCCE.2026.1403039
PDF pdf/39_Solemate AI-Powered Smart Assistive Navigation System.pdf
KEYWORDS
References
[1] Redmon, J., et al., "You Only Look Once: Unified, Real-Time Object Detection." (Foundational architecture for the YOLOv8/v11 models used in obstacle detection and real-time hazard recognition.)
[2] Scherer, R., et al., "Multimodal Feedback to Support the Navigation of Visually Impaired People." (Supports the system's multimodal feedback design, showing that combining haptic vibrations with spatial audio cues reduces cognitive load.)
[3] Mubashir, M., et al., "A Survey on Fall Detection: Principles and Approaches." (Basis for the fall and emergency detection logic, which uses tri-axial accelerometer data and sensor fusion to identify emergency events.)
[4] Budai, M., "Mobile Content Accessibility Guidelines on the Flutter Framework." (Supports the Flutter technology stack, focusing on high-performance, accessible interfaces for users with visual impairments.)
[5] Acharya, V., et al., "Real-time Scene Description for the Visually Impaired using Large Vision-Language Models." (Foundation for integrating Google Gemini 1.5 Flash to provide descriptive audio summaries of surroundings.)
Copyright © IJIRCCE 2020. All rights reserved.