This paper describes the first steps of CLIPS/IMAG on the TREC video story segmentation task. We mainly describe the multi-modal features used and their respective performance on the story segmentation task. These features are based on the audio, video, and text modalities. The preliminary system, which has the advantage of requiring relatively little training data, is also presented in this paper. First experiments on the TRECVID 2003 evaluation set yield a recall of 0.613 and a precision of 0.467.
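As a quick illustration of how the reported recall and precision combine, the snippet below computes the standard F1 score from the two figures given above; the F1 value itself is derived here and is not a number reported in the paper.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (standard F1 measure)."""
    return 2 * precision * recall / (precision + recall)

# Recall and precision reported on the TRECVID 2003 evaluation set.
recall = 0.613
precision = 0.467

f1 = f1_score(precision, recall)
print(f"F1 = {f1:.3f}")  # roughly 0.530
```

The harmonic mean penalizes imbalance between the two rates, so the F1 sits closer to the lower of the two values than a simple average would.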