This paper discusses the design of videophone equipment configurations for online sign-language interpretation. We classified interpretation services into three types of situations: on-site interpretation, partial online interpretation, and full online interpretation. For each situation, spatial configurations of the equipment were considered with the issue of nonverbal signals in mind. Simulation experiments of sign interpretation were performed using these spatial configurations, and the quality of each configuration was assessed. The preferred configurations shared a common characteristic: the hearing subject could see the face of his/her principal conversation partner, that is, the deaf subject. These results imply that hearing people who do not understand sign language use nonverbal signals to facilitate interpreter-mediated conversation.
Toyoaki NISHIDA Kazunori TERADA Takashi TAJIMA Makoto HATAKEYAMA Yoshiyasu OGASAWARA Yasuyuki SUMI Yong XU Yasser F. O. MOHAMMAD Kateryna TARASENKO Taku OHYA Tatsuya HIRAMATSU
We describe attempts to have robots behave as embodied knowledge media that permit knowledge to be communicated through embodied interactions in the real world. The key issue is to give robots the ability to associate interactions with information content while interacting with a communication partner. Toward this end, we present two contributions in this paper. The first concerns the formation and maintenance of joint intention, which is needed to sustain the communication of knowledge between humans and robots. We describe a multi-layered architecture that enables interaction with people at different speeds. For fast interactions, we propose an affordance-based method. For medium-speed interactions, we propose control based on an entrainment mechanism. For slow interactions, we propose defeasible interaction patterns based on probabilistic reasoning. The second contribution concerns the design and implementation of a robot that can listen to a human instructor to elicit knowledge, and then present the content of that knowledge, in an appropriate situation, to a person who needs it. In addition, we discuss a future research agenda for achieving robots that serve as embodied knowledge media, and situate the robots-as-embodied-knowledge-media view within the larger perspective of Conversational Informatics.
Kyohei YOSHIKAWA Takashi MACHIDA Kiyoshi KIYOKAWA Haruo TAKEMURA
Displaying a 3D geometric model of a user in real time is an advantage for a telecommunication system, because depth information is useful for nonverbal communication cues such as finger-pointing and gesturing that carry 3D information. However, the range image acquired by a rangefinder suffers from errors due to image noise and distortions in depth measurement, whereas a 2D image is free from such errors. In this paper, we propose a new method for a shared-space communication system that combines the advantages of both 2D and 3D representations. A user is represented as a 3D geometric model in order to exchange nonverbal communication cues, while the background is displayed as a 2D image to give the user adequate information about the remote site's environment. Additionally, a high-resolution texture captured by a video camera is projected onto the 3D geometric model of the user, because the low resolution of the image acquired by the rangefinder makes it difficult to exchange facial expressions. Furthermore, to fill in the regions occluded by the user, old pixel values are reused for the user area in the 2D background image. We have constructed a prototype of a high-presence shared-space communication system based on our method. Through a number of experiments, we have found that our method is more effective for telecommunication than a method using only a 2D or only a 3D representation.
Neal LESH Joe MARKS Charles RICH Candace L. SIDNER
In 1960, the famous computer pioneer J.C.R. Licklider described a vision for human-computer interaction that he called "man-computer symbiosis." Licklider predicted the development of computer software that would allow people "to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own." More than 40 years later, one rarely encounters a computer application that comes close to capturing Licklider's notion of human-like communication and collaboration. We echo Licklider by arguing that true symbiotic interaction requires at least three elements: a complementary and effective division of labor between human and machine; an explicit representation in the computer of the user's abilities, intentions, and beliefs; and the utilization of nonverbal communication modalities. We illustrate this argument with various research prototypes currently under development at Mitsubishi Electric Research Laboratories (USA).