Title: Low-Contrast Visual Sensing and Inertial Navigation in GPS-Denied Environments
Committee Members:
Prof. Hanumant Singh
Prof. Martin Ludvigsen
Prof. Pau Closas
Prof. Michael Everett
Abstract:
Visual-inertial navigation has shown remarkable performance on publicly available datasets, which assume certain ideal conditions such as textured scenes, uniform illumination, and static environments. Real-world scenarios often violate these assumptions, resulting in significant visual degradation. Consequently, classical visual navigation pipelines fail and produce erroneous results, rendering these systems ineffective for demanding field robotic missions.
This research aims to enhance the robustness of visual-inertial systems under visual degradation, taking a comprehensive approach from both systems and algorithm perspectives. The work encompasses two primary objectives. First, it refines the characterization of MEMS-based inertial sensors and of how their errors propagate into position estimates, and it proposes improved dead-reckoning algorithms. Second, it explores the performance limits of visual navigation under moderate to extreme visual degradation and investigates novel algorithms that leverage deep learning to bolster the visual navigation engine. To validate these advancements, new datasets spanning drone and underwater robot scenarios are used, demonstrating the applicability of this work to field robotic applications.
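As a minimal illustration of why the first objective matters (this sketch is not from the thesis; it is a 1D Python example with assumed sensor parameters, whereas the work itself concerns full 3D MEMS IMU error models), the snippet below shows how an uncorrected accelerometer bias, double-integrated during dead reckoning, quickly dominates position error:

import numpy as np

# Minimal 1D strapdown dead-reckoning sketch (illustrative only).
# Accelerometer bias and white noise are double-integrated, so position
# error grows roughly as t^2 (bias) and t^1.5 (noise random walk).
rng = np.random.default_rng(0)

dt = 0.01                 # 100 Hz IMU rate (assumed)
t_end = 60.0              # one minute of dead reckoning
n = int(t_end / dt)

accel_bias = 0.05         # m/s^2, uncorrected constant bias (assumed)
accel_noise_std = 0.02    # m/s^2, white measurement noise (assumed)

true_accel = np.zeros(n)  # the robot is actually stationary
meas_accel = true_accel + accel_bias + accel_noise_std * rng.standard_normal(n)

# Double integration: acceleration -> velocity -> position
vel = np.cumsum(meas_accel) * dt
pos = np.cumsum(vel) * dt

print(f"position error after {t_end:.0f} s: {pos[-1]:.2f} m")
# The bias alone contributes 0.5 * b * t^2 = 0.5 * 0.05 * 60^2 = 90 m,
# which dominates the noise term; hence the emphasis on careful
# characterization of MEMS sensor errors before dead reckoning.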
By addressing the limitations of existing visual-inertial navigation systems and developing robust algorithms, this research aims to significantly improve the reliability and performance of such systems in visually degraded environments, expanding their potential for real-world use in demanding field robotic missions.