
Change detection from video recordings is critical in many applications, including surveillance, medical diagnosis, remote sensing, condition assessment, motion segmentation, and advanced driver assistance systems. The main goal of change detection in videos is to identify the set of pixels that differ significantly between spatially aligned images that are temporally separated. This is an extremely challenging problem because of a variety of factors, including changes in illumination over time, the appearance or disappearance of objects in the scene, and the need for temporal synchronization of the videos. Moreover, when a mobile video acquisition platform is used, changes in the scale of the observed scene, along with rotation and translation between image pairs, are introduced. Consequently, the imaging geometry cannot be modeled by ordinary transform constraints because of the varying field of view. Over the years, many standard image processing techniques have been leveraged to address the change detection problem. Each approach attempts to exploit properties of the image, the application domain, or a combination of the two. The kinds of changes that are relevant are application-specific, but the underlying algorithms need to detect all changes as a first step; the results can later be post-processed to discriminate between relevant and unimportant changes. It would therefore be beneficial to have a framework that analyzes the changes between videos in an automated manner. In this dissertation, we explore more complex imaging models for solving the change detection task and propose a complete framework that accomplishes spatiotemporal registration and change detection.
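To make the core task concrete, the basic notion of change detection between two spatially aligned frames can be sketched as simple pixel-wise differencing with a threshold. This is a minimal illustration only, not the method developed in this dissertation; the function name `detect_changes` and the threshold value are illustrative assumptions.

```python
import numpy as np

def detect_changes(frame_a, frame_b, threshold=30):
    """Return a boolean change mask between two aligned grayscale frames.

    A pixel is marked as changed when the absolute intensity difference
    exceeds the threshold. Real systems add illumination normalization
    and post-processing on top of this baseline.
    """
    # Widen the dtype so the subtraction of uint8 images cannot wrap around.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

# Toy example: a 4x4 "scene" in which a bright object appears.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1:3, 1:3] = 200          # the appearing object
mask = detect_changes(a, b)  # True exactly on the 2x2 changed region
```

In practice such a mask is only meaningful after the spatiotemporal registration steps described below, which is why the dissertation treats alignment and detection jointly.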
To this end, we develop a set of methods for: 1) temporal alignment of unsynchronized videos, 2) estimation and refinement of disparity maps using temporal consistencies, 3) segmentation of the dominant plane in the scene, 4) estimation of the spatial transform for the dominant plane, and 5) detection of relevant changes in the presence of several changing background elements. To demonstrate the feasibility of the proposed methods, we carried out extensive experiments on videos obtained from various sources and present visual and quantitative results that address: 1) temporal alignment of video pairs recorded by mobile platforms under varying illumination and scene conditions, 2) scene depth estimation, dominant plane segmentation, and change detection between videos captured by moving sensors, where complicated geometry and parallax are present, and 3) detection of relevant changes in videos acquired by stationary cameras, where the environment contains several dynamic regions.
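The spatial transform in step 4 is typically a planar homography, since points on a single dominant plane map between views through a 3x3 matrix in homogeneous coordinates. The sketch below shows how such a transform is applied to point coordinates; it is a generic illustration under that planar assumption, not the estimation procedure proposed in the dissertation, and the helper name `apply_homography` is an assumption.

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of 2-D points through a 3x3 homography H.

    Points are lifted to homogeneous coordinates, multiplied by H, and
    projected back by dividing out the third coordinate.
    """
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3) homogeneous
    mapped = pts_h @ H.T                              # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]             # back to (N, 2)

# A pure-translation homography shifting the plane by (2, 3).
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0],
                [1.0, 1.0]])
out = apply_homography(H, pts)
```

Once the dominant plane is registered this way, residual differences off the plane (parallax) must be handled by the disparity-based reasoning of steps 2 and 3 before change detection is applied.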



Video matching, Change detection, Video synchronization, Disparity estimation, Image registration