This paper presents a system that recovers camera motion parameters
and segments mobile objects in video documents for content indexing.
Two different methods are used to recover the camera motion (relative
to the main background): the first for a camera held at a fixed
location with rotational and zoom degrees of freedom, and the second
for a camera undergoing arbitrary motion but with a fixed focal
length. The first method is based on the search for an optimal projective
transform between consecutive images, combined with an iterative
background/mobile-object segmentation process. The second method is based on a
paraperspective factorization method for shape and motion recovery. Both
methods rely on a dense, high-quality matching between consecutive
images (optical flow). The system also attempts to classify shots or
sub-segments of shots into one of the following categories: ``no
motion'', ``non-mobile camera motion'', ``mobile camera motion'', or
``other type of motion''. Further subcategorization can be performed for
each recovered type. Results are presented using sequences extracted
from document 8 of the ISIS GDR-PRC GT10/AIM corpus.
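The projective alignment step of the first method can be illustrated with a minimal sketch: given point correspondences between consecutive frames (e.g., sampled from the optical flow), a standard direct linear transform (DLT) recovers the 3x3 projective transform. This is an illustrative reconstruction under common assumptions, not the paper's actual optimization; the function names are ours.

```python
import numpy as np

def estimate_projective_transform(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    with the direct linear transform (DLT).

    src, dst: (N, 2) arrays of corresponding image coordinates, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix,
    # taken as the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to (N, 2) points, returning the (N, 2) mapped points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In a segmentation loop of the kind described above, points whose motion disagrees with the estimated transform would be flagged as belonging to mobile objects and excluded from the next estimation pass.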