DEVELOPMENT OF AN AI-POWERED ASSISTIVE WEARABLE PROTOTYPE ENABLING VISUAL INTERPRETATION AND CONTEXTUAL VERBAL FEEDBACK
Keywords:
Artificial intelligence, Assistive wearable technology, Visual impairment, Edge computing, Context-aware verbal feedback

Abstract
Visual impairment significantly restricts independent mobility and situational awareness, particularly in environments where access to advanced assistive technologies is limited by cost and infrastructure. This study presents the development of an AI-powered assistive wearable prototype designed to enhance environmental understanding through real-time visual interpretation and contextual verbal feedback. The proposed system combines on-device visual perception with spoken interaction to detect obstacles, recognize everyday objects, and convey relevant information to users in an intuitive, context-aware manner. Emphasis is placed on low-cost hardware selection, energy efficiency, and a wearable form factor to ensure practicality for daily use. A user-centered design approach guides the development process, followed by iterative prototyping and optimization of lightweight AI models for real-time performance. The prototype is evaluated through controlled experiments and real-world testing with visually impaired participants to assess accuracy, usability, and overall effectiveness. Results indicate that the system improves navigation confidence and environmental awareness while maintaining affordability and responsiveness. The proposed prototype demonstrates the potential of intelligent, context-aware wearable assistance to support greater independence and quality of life for visually impaired individuals.
