Real-Time Feature-Based Video Stabilization on FPGA

Abstract:

Digital video stabilization is an important video enhancement technology that aims to remove unwanted camera vibrations from video sequences. Trading off stabilization performance against the feasibility of real-time hardware implementation, this paper presents a feature-based full-frame video stabilization method and a novel, complete, fully pipelined architectural design to implement it on a field-programmable gate array (FPGA). In the proposed method, feature points are first extracted with the oriented features from accelerated segment test and rotated binary robust independent elementary features (ORB) algorithm and matched between consecutive frames. Next, the matched point pairs are fitted to an affine transformation model using a random sample consensus (RANSAC)-based approach to estimate inter-frame motion robustly. The estimated results are then accumulated to compute the cumulative motion parameters between the current and reference frames, and the translational components are smoothed by a Kalman filter to estimate the intentional camera movement. Finally, a mosaicked image is constructed from the cumulative motion parameters using an image mosaicking technique, and a display window of the desired frame size is positioned according to the computed intentional camera movement to obtain a full motion-compensated frame. Using pipelining and parallel processing strategies, the whole process has been designed as a novel, complete, fully pipelined architecture and implemented on Altera’s Cyclone III FPGA to build a real-time stabilization system. Experimental results show that the proposed system can handle standard PAL video input containing arbitrary translation and rotation and can produce full-frame stabilized output, providing a better viewing experience, at 22.37 ms/frame, thus achieving real-time processing performance.
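As a rough illustration only, and not the paper's FPGA architecture, the sketch below approximates the per-frame software flow the abstract describes in Python with OpenCV: ORB feature matching between consecutive frames, RANSAC-based affine motion estimation, and Kalman smoothing of the accumulated translation. The function names and noise parameters are illustrative assumptions, and the mosaicking and display-window stages are omitted.

# Minimal software sketch of the described pipeline (not the FPGA design).
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)                    # oriented FAST + rotated BRIEF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_interframe_motion(prev_gray, curr_gray):
    """Fit an affine model to ORB matches between two consecutive gray frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    identity = np.eye(2, 3, dtype=np.float32)          # fall back to "no motion"
    if des1 is None or des2 is None:
        return identity
    matches = matcher.match(des1, des2)
    if len(matches) < 3:
        return identity
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched pairs before fitting the affine model.
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return M.astype(np.float32) if M is not None else identity

# Kalman filter over the accumulated (x, y) translation: the filtered value is
# treated as the intentional camera path, the residual as jitter to compensate.
kf = cv2.KalmanFilter(4, 2)                            # state: [x, y, vx, vy]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-4      # illustrative values
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def smooth_translation(cum_tx, cum_ty):
    """Return the Kalman-smoothed (intentional) cumulative translation."""
    kf.predict()
    est = kf.correct(np.array([[cum_tx], [cum_ty]], np.float32))
    return float(est[0, 0]), float(est[1, 0])

In such a software sketch, the difference between the accumulated translation and its Kalman-smoothed estimate gives the jitter to compensate per frame; the paper's contribution is mapping this kind of flow onto a fully pipelined FPGA architecture rather than the software formulation itself.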

 

