In the prototype system, each observation station is equipped with an
APS camera (implemented with a SONY EVI-G20). So far we have developed the
following algorithms for object detection, tracking, and behavior
recognition.
- Appearance-Based Object Detection and Tracking:
The APS camera first generates a panoramic background image by
changing its pan, tilt, and zoom parameters. It then subtracts from a
live input video image the corresponding background sub-image, which
is synthesized from the panoramic background image using the current
pan, tilt, and zoom parameters. By analyzing the subtracted image,
objects are detected as anomalous regions, and the camera parameters
are controlled to track and focus on the target object. Experiments in
real-world scenes demonstrated the practical utility and efficiency of
our APS camera. (See [10] for technical details; a minimal sketch of
the subtraction step is given after the figure captions below.)
- Object Behavior Recognition by Selective Attention:
We developed a system (Fig. 14) to recognize object
behaviors with a fixed camera. In the object model learning phase, a
temporal sequence of anomalous regions is extracted by applying
background subtraction to the input video images (Fig.
15). The system then constructs a
nondeterministic finite automaton (NFA) model from a set of
such sequences representing the same object behavior (e.g., entering
from a door). Each state of the NFA represents an intermediate stage of
the behavior and records a focusing region used to verify whether the
object in the current input image is at that stage. If the verification
succeeds, the state is activated. When the state activation propagates
to the final state, the system recognizes the object behavior
represented by the NFA model. By using a group of NFA models
representing different object behaviors, we can classify the object
behavior captured by a video camera. (See [10] and
[11] for technical details; a sketch of the activation-propagation
idea is also given below.) Currently we are
improving the system by introducing multiple observation stations with
fixed cameras and communication mechanisms between them.
Figure 14: Behavior recognition model.
Figure 15: A series of anomalous regions.
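To make the subtraction step of the first item concrete, here is a
minimal Python/NumPy sketch. It assumes a grayscale panorama stored as
a 2-D array and a deliberately simplified camera model in which pan and
tilt are pixel offsets into the panorama and zoom is a scale factor;
all function and parameter names are hypothetical and are not taken
from [10].

import numpy as np

def synthesize_background(panorama, pan, tilt, zoom, out_shape=(240, 320)):
    """Cut out the sub-image of the panoramic background that the camera
    currently sees. Simplified model (assumption): pan/tilt are pixel
    offsets into the panorama and zoom is a scale factor."""
    h, w = out_shape
    win_h, win_w = int(h / zoom), int(w / zoom)
    window = panorama[int(tilt):int(tilt) + win_h, int(pan):int(pan) + win_w]
    # Nearest-neighbour resize of the window to the live-image resolution.
    ys = np.linspace(0, win_h - 1, h).astype(int)
    xs = np.linspace(0, win_w - 1, w).astype(int)
    return window[np.ix_(ys, xs)]

def detect_anomalous_regions(live, background, threshold=30):
    """Background subtraction: pixels that differ from the synthesized
    background by more than `threshold` are marked as anomalous."""
    diff = np.abs(live.astype(int) - background.astype(int))
    return diff > threshold

def tracking_target(mask):
    """Centroid of the anomalous region, usable to drive pan/tilt control."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Toy usage: a flat panorama and a live image with a bright "object".
panorama = np.full((480, 1280), 100, dtype=np.uint8)
live = synthesize_background(panorama, pan=200, tilt=50, zoom=1.5).copy()
live[100:130, 150:180] = 220                      # simulated moving object
mask = detect_anomalous_regions(live, synthesize_background(panorama, 200, 50, 1.5))
print("target centroid:", tracking_target(mask))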
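The activation propagation of the NFA-based behavior recognizer can be
sketched in the same spirit. This is only an illustration under strong
assumptions, not the implementation of [10] or [11]: each state's
focusing region is reduced to a rectangle, verification is reduced to a
centroid-in-rectangle test, and the "entering from a door" model below
is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class State:
    """One intermediate stage of a behavior; `focus` is the image region
    (x0, y0, x1, y1) in which the object must appear for this state to fire."""
    focus: tuple
    successors: list = field(default_factory=list)
    final: bool = False

def object_in_focus(centroid, focus):
    x, y = centroid
    x0, y0, x1, y1 = focus
    return x0 <= x <= x1 and y0 <= y <= y1

def recognize(nfa_start, centroid_sequence):
    """Propagate activation through the NFA over a temporal sequence of
    anomalous-region centroids; return True when a final state activates."""
    active = {id(nfa_start): nfa_start}
    for centroid in centroid_sequence:
        next_active = {}
        for state in active.values():
            # A state stays active while its focusing region is verified,
            # and activation also spreads to verified successor states.
            for candidate in [state] + state.successors:
                if object_in_focus(centroid, candidate.focus):
                    if candidate.final:
                        return True
                    next_active[id(candidate)] = candidate
        # Simplification: if nothing is verified, fall back to the start state.
        active = next_active or {id(nfa_start): nfa_start}
    return False

# Hypothetical "entering from a door" model: door area -> hallway -> room.
s_room = State(focus=(200, 0, 320, 240), final=True)
s_hall = State(focus=(100, 0, 200, 240), successors=[s_room])
s_door = State(focus=(0, 0, 100, 240), successors=[s_hall])

print(recognize(s_door, [(50, 120), (150, 120), (250, 120)]))   # True
print(recognize(s_door, [(250, 120), (150, 120), (50, 120)]))   # False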