This paper introduces an original algorithm for optical flow
detection, based on an iterative search for a displacement field that
minimizes the L1 or L2 distance between two images.
Both images are sliced into parallel, overlapping strips.
Corresponding strips are aligned by dynamic programming, exactly as 2D
representations of speech signals are aligned by the DTW algorithm. Two passes
are performed using orthogonal slicing directions, and the process is iterated
in a pyramidal fashion by reducing the spacing and width of the strips.
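The per-strip alignment step can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: it assumes a plain 1D DTW over pixel intensities with an L1 local cost and the three standard steps (horizontal, vertical, diagonal), and derives a displacement value per pixel from the warping path.

```python
import numpy as np

def dtw_align(strip_a, strip_b):
    """Align two 1D strips (e.g. image rows) by dynamic programming,
    minimizing the cumulative L1 distance, as in speech DTW.
    Returns the warping path as a list of (i, j) index pairs."""
    n, m = len(strip_a), len(strip_b)
    cost = np.abs(strip_a[:, None] - strip_b[None, :])  # local L1 cost
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,                 # vertical
                acc[i, j - 1] if j > 0 else np.inf,                 # horizontal
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,   # diagonal
            )
            acc[i, j] = cost[i, j] + best_prev
    # Backtrack from (n-1, m-1) to (0, 0) along minimal accumulated cost.
    i, j = n - 1, m - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], (i - 1, j - 1)))
        i, j = min(candidates)[1]
        path.append((i, j))
    path.reverse()
    return path

def displacement(path, length):
    """Convert a warping path into a per-pixel displacement j - i,
    averaging when one source pixel is matched to several targets."""
    disp = np.zeros(length)
    counts = np.zeros(length)
    for i, j in path:
        disp[i] += j - i
        counts[i] += 1
    return disp / counts
```

Running this on a strip and a shifted copy of it yields a positive displacement along the shifted region; in the full algorithm, such per-strip displacements from the two orthogonal passes would be combined into the 2D flow field.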
The algorithm provides very high quality matching, both on calibrated
patterns and as judged by visual inspection. The results appear to be at
least as good as those obtained with classical optical flow detection methods.