We present a monocular vision-based autonomous navigation system for a commercial quadcopter. The quadcopter communicates with a ground-based laptop over a wireless connection. The video stream from the drone's front camera and the navigation data measured onboard are sent to the ground station and processed by a vision-based SLAM system. To handle motion blur and frame loss in the received video, our SLAM system combines an improved robust feature tracking scheme with a relocalisation module that recovers quickly from tracking failure. An Extended Kalman filter (EKF) is designed for sensor fusion; it estimates accurate 3D positions and velocities as well as the scale factor of the monocular SLAM. Using a motion capture system with millimeter-level precision, we identify the system models of the quadcopter and design the PID controllers accordingly. We demonstrate that, after a few simple manual initialization steps, the quadcopter can navigate along pre-defined paths in an unknown indoor environment using only its front camera and onboard sensors.
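As a rough illustration of the control loop described above, the sketch below shows a discrete per-axis PID controller driven by the position error between a reference waypoint and the EKF-estimated position. This is a minimal, hypothetical sketch: the class, gains, and sample time are illustrative assumptions, not the values identified in the paper.

```python
# Hypothetical per-axis PID controller sketch. Gains (kp, ki, kd) and the
# sample time dt are illustrative placeholders, not the identified values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over one sample period.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


# One controller per axis; the error is the reference position minus the
# position estimated by the EKF. The output would be mapped to an attitude
# command (e.g. pitch for the x-axis).
pid_x = PID(kp=0.5, ki=0.05, kd=0.3, dt=0.02)
cmd_x = pid_x.update(error=1.0)
```

In practice one such controller would run for each translational axis (and yaw), with the outputs sent to the drone as attitude/velocity commands at the rate the navigation data arrives.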
Video Demo I: Hovering
Video Demo II: Square Path Following
Video Demo III: Circle Path Following
 R. Huang, P. Tan and B. M. Chen, Monocular vision-based autonomous navigation system on a toy quadcopter in unknown environments, Proceedings of the 2015 International Conference on Unmanned Aircraft Systems, Denver, USA, pp. 1260-1269, June 2015. PDF
Code and How to use
The code is available on GitHub: github code
Please cite the publication above if you use the code in your research.
Note: The code is intended for the development of research-based applications, so it can be messy and tricky to set up. Detailed instructions are still under development; please feel free to contact me if you are interested in this work or encounter problems using the software.
Rui Huang, Simon Fraser University
Personal Webpage: http://www.sfu.ca/~rha55/