Presentations by Invited Speakers
A Taxonomy of Social Robots
Cynthia Breazeal (Massachusetts Institute of Technology, USA)
No longer restricted to the factory floor or hazardous environments, robots are making their way into human environments. The need to perform tasks and interact with ordinary people in an appropriate manner poses new challenges and opens new opportunities for applications in the home, office, school, entertainment locales, and more. This motivates the development of capable robot creatures that are not only useful but also rewarding to interact with. This talk addresses several core challenges in creating socially intelligent robots that can interact with, communicate with, and learn from people in a human-like manner.
Building an Aware Home: Technologies for the Way We May Live
Irfan Essa (Georgia Institute of Technology, USA)
I will present an overview of our ongoing research on developing technologies within a residential setting (a home) that will affect our everyday lives. Towards this end, I will describe the Aware Home project, situated in Georgia Tech's Residential Laboratory, a unique living laboratory for the exploration of ubiquitous computing in a domestic setting. I will specifically concentrate on the sensing and perception technologies that can enable a home environment to be aware of the whereabouts and activities of its occupants. I will discuss the computer vision and audition work we are pursuing to track and monitor the residents, and the methods we are developing to recognize the activities of the residents over short and extended periods. I will also discuss the technological, design, and engineering research challenges inherent in this problem domain, and our focus on awareness to help maintain independence and quality of life for an aging population. For further information on this project and details of the team leading this effort, see http://www.awarehome.gatech.edu. The Aware Home research is funded by the NSF, the Georgia Research Alliance (GRA), and a consortium of industrial partners.
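To give a flavor of what recognizing occupant activities from home sensing can involve, here is a minimal sketch of decoding an activity sequence from discrete sensor events with a small hidden Markov model. The activities, sensor names, and probabilities are invented for illustration and are not the Aware Home project's actual sensors or models.

```python
# Hypothetical sketch: inferring a resident's activity from discrete sensor
# events with a small hidden Markov model. States, observations, and all
# probabilities are illustrative only, not the Aware Home's actual models.

activities = ["sleeping", "cooking", "watching_tv"]            # hidden states
sensors    = ["bedroom_motion", "kitchen_motion", "tv_power"]  # observations

start_p = {"sleeping": 0.5, "cooking": 0.25, "watching_tv": 0.25}
trans_p = {
    "sleeping":    {"sleeping": 0.8, "cooking": 0.15, "watching_tv": 0.05},
    "cooking":     {"sleeping": 0.1, "cooking": 0.7,  "watching_tv": 0.2},
    "watching_tv": {"sleeping": 0.2, "cooking": 0.2,  "watching_tv": 0.6},
}
emit_p = {
    "sleeping":    {"bedroom_motion": 0.8, "kitchen_motion": 0.1, "tv_power": 0.1},
    "cooking":     {"bedroom_motion": 0.1, "kitchen_motion": 0.8, "tv_power": 0.1},
    "watching_tv": {"bedroom_motion": 0.1, "kitchen_motion": 0.1, "tv_power": 0.8},
}

def most_likely_activities(observations):
    """Viterbi decoding: the most probable activity sequence for the events."""
    V = [{a: (start_p[a] * emit_p[a][observations[0]], [a]) for a in activities}]
    for obs in observations[1:]:
        prev, cur = V[-1], {}
        for a in activities:
            prob, path = max(
                (prev[b][0] * trans_p[b][a] * emit_p[a][obs], prev[b][1] + [a])
                for b in activities
            )
            cur[a] = (prob, path)
        V.append(cur)
    return max(V[-1].values())[1]

print(most_likely_activities(["kitchen_motion", "kitchen_motion", "tv_power"]))
```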
Neural Mechanisms and Ecological Validity of Finite-State Song Syntax in Bengalese Finches
Kazuo Okanoya (Chiba University, Japan)
Male songbirds acquire their songs through learning from their fathers and other conspecific birds. The processes and brain mechanisms of birdsong learning are very similar to those of language learning in humans. The Bengalese finch is a domesticated form of the wild white-backed munia. Although these are the same species of bird, they sing quite different songs: Bengalese finch songs are louder, contain more note-to-note transition patterns, and are more pure-tone-like than white-backed munia songs. To determine the extent to which these strain differences are due to cultural or innate effects, we cross-fostered chicks between the two strains and examined how songs are transmitted across generations. The results suggest that Bengalese finch chicks fostered to white-backed munias copied most of the white-backed munia song structures, while white-backed munia chicks fostered to Bengalese finches failed to copy some of the Bengalese-type song characteristics. These results suggest that this preparation could be an excellent model for studying the interaction between culture and genetics and, further, innateness in language learning.
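As a rough illustration of what a finite-state song syntax looks like, the sketch below generates and checks note sequences against a small automaton of allowed note-to-note transitions. The note labels and transitions are invented for the example, not measured from actual Bengalese finch songs.

```python
import random

# Illustrative finite-state model of note-to-note transitions in a song.
# Note labels and transition sets are invented for this sketch; a real
# song syntax would be estimated from recorded songs.
transitions = {
    "start": ["a"],
    "a": ["b", "c"],
    "b": ["c", "a"],
    "c": ["d", "end"],
    "d": ["a", "end"],
}

def generate_song(max_notes=20):
    """Random walk through the automaton, yielding one song (note sequence)."""
    state, song = "start", []
    while len(song) < max_notes:
        state = random.choice(transitions[state])
        if state == "end":
            break
        song.append(state)
    return song

def is_grammatical(song):
    """Check whether a note sequence is accepted by the finite-state syntax."""
    state = "start"
    for note in song:
        if note not in transitions.get(state, []):
            return False
        state = note
    return "end" in transitions.get(state, [])

print(generate_song())
print(is_grammatical(["a", "b", "c", "d"]))  # True
```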
Wearable Agents
Thad Starner (College of Computing, Georgia Institute of Technology, USA)
Forty years ago, pioneers such as J.C.R. Licklider and Douglas Engelbart championed the idea of interactive computing as a way of creating a "man-machine symbiosis" in which mankind would be enabled to think in ways that were previously impossible. Unfortunately, at that time, such a mental coupling was limited to sitting in front of a terminal and keying in requests. Today, wearable computers, through their small size, proximity to the body, and usability in almost any situation, may enable the more intimate form of cognition suggested by these early visions. Specifically, wearable computers may begin to act as intelligent agents during everyday life, assisting in a variety of tasks depending on the user's context. One of the main obstacles to such wearable agents is perceiving the user's environment. In this talk, I will demonstrate ways of exploiting pattern recognition techniques to create new interfaces and, conversely, of using interface design to compensate for the recognition errors inherent in these techniques.
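The sketch below illustrates the two ideas in the last sentence under stated assumptions: a wearable agent classifies the user's context from a few body-worn sensor features, and the interface tolerates recognition errors by declining to act when the classifier is uncertain. The contexts, feature values, and threshold are hypothetical, not Starner's actual system.

```python
import math

# Hypothetical sketch: a wearable agent infers the user's context from a few
# body-worn sensor features and uses a reject option so the interface
# degrades gracefully when recognition is uncertain.
# (Contexts, feature values, and the margin are invented for illustration.)

# Prototype feature vectors: (accelerometer variance, ambient sound level in dB)
prototypes = {
    "sitting_in_meeting": (0.1, 55.0),
    "walking_outdoors":   (2.5, 70.0),
    "driving":            (0.4, 75.0),
}

def classify_context(features, reject_margin=0.2):
    """Nearest-prototype classification with a simple reject option."""
    distances = sorted(
        (math.dist(features, proto), label) for label, proto in prototypes.items()
    )
    best, second = distances[0], distances[1]
    # If the two closest contexts are nearly equidistant, admit uncertainty
    # instead of acting on a likely recognition error.
    if second[0] - best[0] < reject_margin:
        return "unknown"
    return best[1]

def agent_action(context):
    """Interface behaviour chosen per context; conservative when uncertain."""
    return {
        "sitting_in_meeting": "mute notifications, queue messages",
        "walking_outdoors":   "read messages aloud",
        "driving":            "defer all interaction",
    }.get(context, "take no action until context is clear")

observation = (0.15, 56.0)
print(agent_action(classify_context(observation)))
```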
Virtual Environments and the Global Array
Thomas A. Stoffregen (University of Minnesota, USA)
Virtual environments are an example of man-machine symbiotic systems. In a virtual environment, the user must control some actions relative to the simulated world (e.g., navigation, manipulation). In most virtual environments the user must simultaneously control the dynamic orientation of the head and body relative to the "real" environment. Motion of the simulated environment may not require changes in head/body orientation. Similarly, shifts in head/body orientation may produce novel changes in the orientation of the head, eyes, and limbs relative to the simulated environment. In order to operate effectively, the user must be able to distinguish motion relative to the simulated environment from motion relative to the real environment. I will discuss the global array, a recently identified entity that provides information sufficient to make these discriminations and to achieve successful simultaneous control of different actions relative to different referents. Users who are sensitive to the global array will have a source of reliable perceptual information for the control of complex actions.
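As a minimal numeric illustration of the two motions being distinguished here (it assumes invented velocities and is not Stoffregen's formalism), the same head movement can be described relative to the real room and relative to a simulated environment that is itself moving, and the two descriptions differ.

```python
# Hypothetical sketch: the user's head motion relative to the real room versus
# relative to a moving simulated environment. All values are invented.

def relative_velocity(observer_v, referent_v):
    """Velocity of the observer as seen from a (possibly moving) referent."""
    return tuple(o - r for o, r in zip(observer_v, referent_v))

head_v_room = (0.0, 0.0, 0.0)    # head stationary in the real room (m/s)
sim_world_v = (-1.0, 0.0, 0.0)   # simulated world translating past the user

# Relative to the real environment the head is still; relative to the
# simulated environment it appears to move forward. The user must tease these
# apart to control posture and simulated navigation at the same time.
print(relative_velocity(head_v_room, (0.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)
print(relative_velocity(head_v_room, sim_world_v))       # (1.0, 0.0, 0.0)
```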
SmartKom: Fusion and Fission of Speech, Gestures, and Facial Expressions
Wolfgang Wahlster (German Research Center for Artificial Intelligence, Germany)
In this talk, we present a multimodal dialogue system that combines speech, gesture, and facial expressions for input and output. SmartKom provides an anthropomorphic and affective user interface to mobile web services through its personification of an auto-animated interface agent. Robust understanding of spontaneous speech is combined with video-based recognition of natural gestures and facial expressions. We discuss new computational methods for the seamless integration and mutual disambiguation of multimodal input and output on a semantic and pragmatic level. SmartKom is based on the situated delegation-oriented dialogue paradigm, in which the user delegates a task to a virtual communication assistant, visualized as a life-like character on a graphical display. We describe the SmartKom architecture, the use of an XML-based mark-up language for multimodal content, and the most distinguishing features of the fully operational SmartKom system. We discuss applications of the SmartKom technology to electronic TV program guides, advanced indoor and outdoor route guidance systems, and other location- and resource-sensitive web services that have been developed together with our industrial partners DaimlerChrysler, Philips, Siemens, and Sony. We will show that the multimodal dialogue technology developed by the SmartKom consortium provides the basis for human-centered, added-value services on 3G mobile networks such as UMTS.
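To make the idea of fusion with mutual disambiguation concrete, here is a minimal sketch in which an underspecified spoken request ("show me that one") is resolved against competing pointing-gesture hypotheses by combining recognizer confidences. The data structures, scores, and intent names are invented and do not reflect SmartKom's actual markup language or fusion algorithms.

```python
# Hypothetical sketch of late fusion with mutual disambiguation: a spoken
# intent is paired with a gesture-resolved referent by scoring every joint
# interpretation with the product of the two recognizers' confidences.

speech_hypotheses = [
    {"intent": "show_program_details", "score": 0.7},
    {"intent": "record_program",       "score": 0.3},
]
gesture_hypotheses = [  # pointing gesture resolved to on-screen objects
    {"referent": "tv_listing_3", "score": 0.6},
    {"referent": "tv_listing_4", "score": 0.4},
]

def fuse(speech, gesture):
    """Cross-product fusion: score every joint interpretation, keep the best.

    Each modality's confidence weights the other, so an ambiguous gesture can
    be disambiguated by a confident utterance and vice versa.
    """
    joint = [
        {"intent": s["intent"], "referent": g["referent"],
         "score": s["score"] * g["score"]}
        for s in speech
        for g in gesture
    ]
    return max(joint, key=lambda h: h["score"])

print(fuse(speech_hypotheses, gesture_hypotheses))
# {'intent': 'show_program_details', 'referent': 'tv_listing_3', 'score': 0.42}
```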