RoboSub Autonomous Underwater Vehicle Club

Overview

RoboSub is a competition in which autonomous underwater vehicles complete navigation, perception, and manipulation tasks. In my junior year, I became president of my university's RoboSub club and led our team to its first top-10 placement: 7th out of 59 teams at the international RoboSub competition. I was also a major software contributor, writing the vehicle's state machine and integrating our six-DoF PID control framework with a CNN bounding-box detection model to solve tasks, showcased in the following video.

Competition Highlight

The sub autonomously makes contact with the first buoy and the backside of the second buoy.

Link to full video: YouTube

Gallery

Our Approach

Our software stack was written in Python on ROS 1, with zero-copy C++ nodelet code for the image-processing pipeline. We used the SMACH library to organize mission logic as a state machine. Our task solutions primarily had the sub navigate toward an estimated prior location until a camera detection was made. We then adjusted the setpoints of the PID loops based on the bounding-box locations from the CNN model until the vehicle reached the position needed to complete the task (e.g., making contact with a buoy or dropping a marker into a bin).
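As a simplified illustration of the detection-to-setpoint step (not our actual code; the function name, gains, and camera resolution here are invented for the sketch), a bounding box can be mapped to PID setpoint corrections by steering the box center toward the image center and driving forward until the box fills enough of the frame:

```python
# Illustrative bounding-box visual-servoing sketch; the gains and the
# target area fraction are hypothetical values, not the club's tuning.

def bbox_to_setpoint_deltas(bbox, img_w=640, img_h=480,
                            k_yaw=0.002, k_heave=0.002,
                            target_area_frac=0.15, k_surge=2.0):
    """Map a detection (x, y, w, h) in pixels to setpoint corrections.

    Returns (d_yaw, d_heave, d_surge): yaw/heave corrections that center
    the box in the frame, and a forward correction that closes distance
    until the box occupies target_area_frac of the image.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    err_x = cx - img_w / 2                 # positive: target is to the right
    err_y = cy - img_h / 2                 # positive: target is below center
    area_frac = (w * h) / (img_w * img_h)  # proxy for distance to target

    d_yaw = k_yaw * err_x                  # yaw toward the target
    d_heave = k_heave * err_y              # descend/ascend toward the target
    d_surge = k_surge * max(0.0, target_area_frac - area_frac)  # close in
    return d_yaw, d_heave, d_surge
```

Each cycle, these deltas nudge the PID setpoints rather than commanding thrusters directly, so the existing control loops handle the actual motion.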

Our vehicle featured seven cameras granting 360-degree lateral and downward vision. State estimation relied on an Extended Kalman Filter that fused IMU data, compass data, pressure-depth data, and velocity data from a Doppler Velocity Log. Six thrusters provided six-DoF control, with a PID loop running on every axis against the state estimate. A dedicated NVIDIA GPU ran the CNN model.
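Our full EKF fused several sensors at once; as a stripped-down, hypothetical sketch of the same idea, a one-dimensional Kalman filter can fuse a depth prediction (from commanded heave velocity) with noisy pressure-depth measurements. The class name and noise values below are illustrative, not the vehicle's actual filter:

```python
# Simplified 1-D Kalman filter for depth; an EKF generalizes this
# predict/update cycle to the full nonlinear, multi-sensor state.

class DepthKF:
    def __init__(self, depth0=0.0, var0=1.0, q=0.01, r=0.25):
        self.depth = depth0   # depth estimate (m)
        self.var = var0       # estimate variance
        self.q = q            # process noise (motion-model uncertainty)
        self.r = r            # measurement noise (pressure sensor)

    def predict(self, heave_vel, dt):
        # Propagate the state with the commanded vertical velocity;
        # uncertainty grows because the model is imperfect.
        self.depth += heave_vel * dt
        self.var += self.q

    def update(self, measured_depth):
        # Blend prediction and measurement, weighted by their variances.
        k = self.var / (self.var + self.r)   # Kalman gain
        self.depth += k * (measured_depth - self.depth)
        self.var *= (1.0 - k)
        return self.depth
```

After an update, the estimate lands between the prediction and the measurement, and its variance shrinks, which is exactly the behavior the full filter exploits across all axes.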

Team photo

POV: My Undergraduate Life

Clip of me developing the sub's mission logic for the buoy and droppers tasks in the UUV Gazebo simulator.