Visual-Attention Based Background Modelling for Detecting Infrequently Moving Objects

Abstract

Motion is one of the most important cues for separating foreground objects from background in a video. With a stationary camera, it is usually assumed that the background is static while the foreground objects are moving most of the time. In practice, however, foreground objects may show only infrequent motion, such as abandoned objects and sleeping persons, while the background may contain frequent local motions, such as waving trees and grass. Such complexities may prevent existing background subtraction algorithms from correctly identifying the foreground objects. In this paper, we propose a new approach that can detect foreground objects with both frequent and infrequent motions. Specifically, we use a visual-attention mechanism to infer a complete background from a subset of frames and then propagate it to the other frames for accurate background subtraction. Furthermore, we develop a feature-matching based local motion stabilization algorithm to identify frequent local motions in the background and thereby reduce false positives in the detected foreground. The proposed approach is fully unsupervised, using no supervised learning for object detection or tracking. Extensive experiments on a large number of videos demonstrate that the proposed approach outperforms state-of-the-art motion detection and background subtraction methods.
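To make the background-subtraction pipeline concrete, the following is a minimal, hypothetical sketch of the generic baseline the abstract builds on: a background model is estimated from a subset of frames (here a simple pixel-wise temporal median, not the paper's visual-attention mechanism) and subtracted from a new frame with a fixed threshold to obtain a foreground mask. The function names, threshold value, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_background(frames):
    # Pixel-wise temporal median over a subset of frames: a pixel
    # occluded by a foreground object in only a few frames recovers
    # its true background value. (A stand-in for the paper's
    # visual-attention based background inference.)
    return np.median(np.stack(frames), axis=0)

def subtract_background(frame, background, threshold=30):
    # Foreground mask: pixels whose absolute difference from the
    # background model exceeds a fixed, illustrative threshold.
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > threshold

# Synthetic example: an 8x8 static scene plus one frame containing
# a bright 3x3 object, standing in for an "infrequently moving" object.
bg = np.full((8, 8), 50, dtype=np.uint8)
frames = [bg.copy() for _ in range(5)]
test_frame = bg.copy()
test_frame[2:5, 2:5] = 200  # foreground object
mask = subtract_background(test_frame, estimate_background(frames))
```

Note that this naive median model is exactly what fails in the scenarios the paper targets: an object that stays still for most of the subset is absorbed into the median, and frequently waving background (trees, grass) triggers false positives, motivating the attention-based inference and local motion stabilization described above.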

