This paper gives an overview of our five-year project on Cooperative Distributed Vision (CDV). The project started in October 1996 with the support of the Research for the Future Program of the Japan Society for the Promotion of Science.
From a practical point of view, the goal of CDV can be summarized as follows (Fig. 1):
Embed in the real world a group of network-connected Observation Stations (real-time image processors with active camera(s)) and vision-equipped mobile robots, and realize
Figure 1: Cooperative distributed vision. It may be called Ubiquitous Vision.
Applications of CDV include
The aim of the project is not to develop these specific application systems but to establish the scientific and technological foundations needed to realize CDV systems capable enough to work persistently in the real world.
From a scientific point of view, we focus on the Integration of Perception, Action, and Communication (Fig. 2). That is, the scientific goal of the project is to investigate how these three functions should be integrated to realize intelligent systems; we believe that intelligence does not dwell solely in the brain but emerges from active interaction with the environment through perception, action, and communication.
Figure 2: Intelligence emerges from the integration of perception, action, and communication.
From a technological point of view, we design and develop hardware and software to embody these three functions:
In this paper, we first give a brief review of computer vision research to show the background of CDV. Then we discuss the functionalities of, and mutual dependencies among, perception, action, and communication to formally clarify the meaning of their integration. While this formalization is not yet fully developed, it yields several interesting observations. In the latter half of the paper, we discuss technical issues in implementing CDV systems, together with several preliminary experimental results. The discussion covers a wide spectrum of technological research issues, ranging from vision sensors for real-time depth sensing to cooperative behavior-learning algorithms for vision-based mobile robots.