Vision-based Autonomous UAV Inspection for Bridges
Although recent advances in UAV-based visual inspection of bridges have been widely reported, existing approaches still rely heavily on labor-intensive, hand-crafted route planning and face challenges in large-scale beam-bottom scenarios with limited RTK signals. In addition, the interaction between the unmanned inspection system and its visual inspection targets, namely multi-type structural components with surface damage, remains insufficient. To address these issues, this study investigates a vision-based autonomous UAV inspection framework for bridges. First, a real-time, high-accuracy SLAM method is proposed based on multi-modal fusion of images, point clouds, and inertial measurement units. Second, a universal detection and segmentation model for multi-type structural components and surface damage is established for accurate recognition of inspection targets. Third, an automatic damage mapping, structural reconstruction, and scenario rendering method is developed based on an improved neural radiance field model with multi-perspective consistency. Finally, an interaction strategy is constructed that integrates multi-modal sensing data, the 3D reconstruction model, the digital environment, inspection targets, navigation actions, and a decision agent, using deep reinforcement learning to update the UAV flight path in real time. Validation experiments are performed both in a ROS simulation environment and on real-world bridge inspection data.
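To make the real-time path-updating step concrete, the following is a minimal, hypothetical sketch of such a decision loop: a policy consumes a fused observation (SLAM pose plus detected inspection targets) and emits a navigation action that shifts the next waypoint. All names here (`FusedObservation`, `select_action`, `update_path`) are illustrative placeholders, and the greedy nearest-target heuristic merely stands in for the trained deep reinforcement learning policy described in the abstract.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FusedObservation:
    """Toy stand-in for multi-modal sensing: a pose estimate from SLAM
    plus detected inspection targets as (x, y, z) positions."""
    pose: Tuple[float, float, float]
    targets: List[Tuple[float, float, float]]

def select_action(obs: FusedObservation) -> Tuple[float, float, float]:
    """Placeholder policy: fly toward the nearest detected target.
    A trained deep RL agent would replace this heuristic."""
    px, py, pz = obs.pose
    if not obs.targets:
        return (0.0, 0.0, 0.0)  # hover when nothing is detected
    nearest = min(
        obs.targets,
        key=lambda t: (t[0] - px) ** 2 + (t[1] - py) ** 2 + (t[2] - pz) ** 2,
    )
    return (nearest[0] - px, nearest[1] - py, nearest[2] - pz)

def update_path(obs: FusedObservation, step: float = 0.5) -> Tuple[float, float, float]:
    """Advance the next waypoint one step along the chosen action direction."""
    dx, dy, dz = select_action(obs)
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1.0  # avoid divide-by-zero
    px, py, pz = obs.pose
    return (px + step * dx / norm, py + step * dy / norm, pz + step * dz / norm)

obs = FusedObservation(pose=(0.0, 0.0, 0.0), targets=[(3.0, 4.0, 0.0)])
print(update_path(obs))  # waypoint moves 0.5 m toward the target at (3, 4, 0)
```

In the actual framework this loop would run onboard at sensor rate, with the observation refreshed from the SLAM and detection modules each cycle.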