Owen Noel Newton FERNANDO Kazuya ADACHI Uresh DUMINDUWARDENA Makoto KAWAGUCHI Michael COHEN
Our group is exploring interactive multi- and hypermedia, especially as applied to virtual and mixed reality multimodal groupware systems. We are researching user interfaces to control source→sink transmissions in synchronous groupware (teleconferences, chatspaces, virtual concerts, etc.). We have developed two interfaces for privacy visualization of narrowcasting (selection) functions in collaborative virtual environments (CVEs): one for a workstation WIMP (windows/icons/menus/pointer) GUI (graphical user interface), and one for networked mobile devices, namely 2.5- and 3rd-generation mobile phones. The interfaces are integrated with other CVE clients, interoperating with a heterogeneous multimodal groupware suite that includes stereographic panoramic browsers and spatial audio backends and speaker arrays. The narrowcasting operations comprise an idiom for selective attention, presence, and privacy, an infrastructure for rich conferencing capability.
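A minimal sketch of one common formulation of narrowcasting selection semantics (an assumption about the selection rule, not the authors' implementation): an entity is active unless it is explicitly excluded (mute for sources, deafen for sinks) or implicitly excluded because some peer is explicitly included (solo/attend) and it is not. All names below are illustrative.

```python
# Hedged sketch of narrowcasting selection predicates; not the authors' code.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    muted: bool = False    # explicit exclusion (mute for sources, deafen for sinks)
    soloed: bool = False   # explicit inclusion (solo for sources, attend for sinks)

def active(entity: Entity, peers: list[Entity]) -> bool:
    """An entity transmits/receives iff it is not explicitly excluded and,
    when any peer is explicitly included, it is among the included ones."""
    if entity.muted:
        return False
    if any(p.soloed for p in peers):
        return entity.soloed
    return True

# Example: soloing one source implicitly excludes the others.
sources = [Entity("alice"), Entity("bob", soloed=True), Entity("carol")]
print([(s.name, active(s, sources)) for s in sources])
# [('alice', False), ('bob', True), ('carol', False)]
```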
Liyanage C. DE SILVA Tsutomu MIYASATO Fumio KISHINO
Here we investigate the unique advantages of our proposed Virtual Space Teleconferencing System (VST) in the area of multimedia teleconferencing, with emphasis on facial emotion transmission and recognition. Specifically, we show that this concept enables a unique mode of communication in which the emotions of the local participant are transmitted to the remote party with a higher recognition rate, by enhancing the emotions through intelligent processing between the local and remote participants. In other words, we show that such emotion-enhanced teleconferencing systems can surpass face-to-face meetings by effectively alleviating the barriers to recognizing emotions between people of different nations. We also show that this is a better alternative to the blurred or mosaicked facial images sometimes seen in television interviews with people who do not wish to be identified in public.
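A toy illustration of the kind of enhancement step described above, under the assumption that expressions travel as a small vector of model-based deformation parameters (the parameter names, gain, and clamping range below are hypothetical, not the paper's algorithm):

```python
# Hedged illustration: amplify an expression's deviation from the neutral face
# before reproducing it at the remote site; all values are hypothetical.

def enhance_expression(params, neutral, gain=1.5, limit=1.0):
    """Scale each expression parameter away from neutral by `gain`,
    clamping to the model's assumed valid range [-limit, limit]."""
    return [max(-limit, min(limit, n + gain * (p - n)))
            for p, n in zip(params, neutral)]

neutral  = [0.0, 0.0, 0.0]     # e.g., eyebrow raise, mouth corner, eye opening
measured = [0.2, 0.1, -0.05]   # parameters estimated from the local participant
print(enhance_expression(measured, neutral))   # ≈ [0.3, 0.15, -0.075]
```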
Noriaki KUWAHARA Shin-ichi SHIWA Fumio KISHINO
In order to display complicated virtual spaces in real time, such as spaces containing dynamic natural scenery, we earlier proposed a method for simplifying the shape data of 3-D trees whereby the amount of shape data is efficiently reduced. The method generates tree shapes based on a fractal model according to the required level of detail (LOD). Using a texture-mapping technique, we experimentally showed that our method can display 3-D tree images with acceptable image quality in real time. However, methods for controlling the LOD of 3-D tree shapes in virtual spaces have yet to be discussed. In this paper, quantitative evaluations were made of the effect of a data simplification method that exploits such visual properties as the difference in resolution between central and peripheral vision. The results show that it is possible to display a complicated scene containing many trees in real time by controlling the LOD of the tree shapes in the virtual space according to these visual properties. Furthermore, to add realism to the virtual space, we consider it important to display the natural sway of wind-blown trees and plants in real time. We therefore propose a method for generating sway data for simplified tree shape data based on a simple physical model, in which each branch is connected to several other branches by springs, together with a new texture-mapping technique for rendering simplified tree shapes so that they appear to have a high LOD. Finally, we show some examples of images of many swaying, wind-blown trees generated in real time using our method.
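A minimal sketch of eccentricity-driven LOD selection of the kind the abstract alludes to, assuming illustrative angular bands (the thresholds and level count are assumptions, not the paper's measured values):

```python
# Hedged sketch: pick a tree's LOD from its angular distance to the gaze
# direction, so peripheral trees use heavily simplified fractal shape data.
import math

def eccentricity_deg(gaze_dir, tree_dir):
    """Angle in degrees between the unit gaze direction and the unit direction to the tree."""
    dot = sum(g * t for g, t in zip(gaze_dir, tree_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def select_lod(ecc_deg, levels=(0, 1, 2, 3), bands=(5.0, 15.0, 40.0)):
    """Map eccentricity to an LOD index: 0 = full detail, last = coarsest (assumed bands)."""
    for lod, bound in zip(levels, bands):
        if ecc_deg <= bound:
            return lod
    return levels[-1]

gaze = (0.0, 0.0, -1.0)                                          # looking down -z
tree = (math.sin(math.radians(20)), 0.0, -math.cos(math.radians(20)))
print(select_lod(eccentricity_deg(gaze, tree)))                  # -> 2 (peripheral, simplified)
```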
Tsutomu MIYASATO Haruo NOMA Fumio KISHINO
This paper describes the results of tests that measured the allowable delay between visual images and tactile information presented via a force feedback device. To investigate the allowable delay, two experiments were performed: 1) subjective evaluation in real space, and 2) subjective evaluation in virtual space using a force feedback device.
Vorawut PRIWAN Hitoshi AIDA Tadao SAITO
This paper studies routing methods for complete broadcast multipoint-to-multipoint communication. For a connection with Z participating nodes (sites), each site transmits one signal and receives Z-1 signals. A routing method that connects each participant by multiple directed point-to-point circuits wastes bandwidth, since the same source-to-destination data may be duplicated needlessly. We propose routing methods in which each participant (site) has its own multicast tree connecting it to the other participants, subject to two constraints: a bound on the delay of each source-destination path and the bandwidth available on each link for the service. For this routing approach, we propose both a heuristic algorithm that finds an approximate solution and an enumeration-based search algorithm that finds the optimal solution, and we compare the approximate solution with the optimal one. This approach can lower costs for subscribers and conserve bandwidth resources for network providers.
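A sketch of one plausible heuristic in the spirit described above (not the paper's algorithm): grow a per-source multicast tree greedily, attaching at each step the cheapest link whose residual bandwidth covers the session and whose resulting source-to-node delay stays within the bound. The graph, costs, and bound below are illustrative.

```python
# Hedged sketch of a delay-bounded, bandwidth-constrained multicast tree heuristic.
def build_tree(links, source, members, delay_bound, demand):
    # links: {(u, v): (cost, delay, residual_bw)}, treated as undirected
    adj = {}
    for (u, v), (c, d, bw) in links.items():
        if bw >= demand:                       # prune links without enough bandwidth
            adj.setdefault(u, []).append((v, c, d))
            adj.setdefault(v, []).append((u, c, d))
    in_tree = {source: 0.0}                    # node -> accumulated delay from source
    tree_edges = []
    while not set(members).issubset(in_tree):
        best = None                            # (cost, u, v, delay_to_v)
        for u, delay_u in in_tree.items():
            for v, c, d in adj.get(u, []):
                if v not in in_tree and delay_u + d <= delay_bound:
                    if best is None or c < best[0]:
                        best = (c, u, v, delay_u + d)
        if best is None:
            raise ValueError("no feasible tree under the given constraints")
        c, u, v, dv = best
        in_tree[v] = dv
        tree_edges.append((u, v))
    return tree_edges

links = {("A", "B"): (1, 2, 10), ("B", "C"): (1, 2, 10),
         ("A", "C"): (3, 1, 10), ("C", "D"): (2, 1, 10)}
print(build_tree(links, "A", ["B", "C", "D"], delay_bound=5, demand=5))
# [('A', 'B'), ('B', 'C'), ('C', 'D')]
```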
Masahide KANEKO Fumio KISHINO Kazunori SHIMAMURA Hiroshi HARASHIMA
Recently, studies aiming at the next generation of visual communication services, which support better human communication, have been carried out intensively in Japan. The principal motive of these studies is to develop new services that are not restricted to the conventional communication framework based on the transmission of waveform signals. This paper focuses on three important keywords in these studies: "intelligent," "real," and "distributed and collaborative," and describes recent research activities. The first keyword, "intelligent," relates to intelligent image coding. As a particular example, model-based coding of moving facial images is discussed in detail. In this method, shape changes and motion of the human face are described by a small number of parameters. This feature leads to new applications such as very low bit-rate transmission of moving facial images, analysis and synthesis of facial expressions, human interfaces, and so on. The second keyword, "real," relates to communication with realistic sensations and virtual space teleconferencing. Among the various component technologies, real-time reproduction of 3-D human images and a cooperative work environment with virtual space are discussed in detail. The last keyword, "distributed and collaborative," relates to collaborative work in a distributed work environment. The importance of visual media in collaborative work, the concept of CSCW, and the requirements for realizing a distributed collaborative environment are discussed. Then, four examples of CSCW systems are briefly outlined.
Sound field telecommunication describes a voice communication system, intended to implement a virtual meeting, in which participants at distant sites experience the sensation of sharing a single room for conversation. Binaural synthesis reconstructs the sound propagation pattern of a particular room or environment in the vicinity of each ear, which seems appropriate for a personal multimedia environment. Localization cues in spatial hearing comprise both the sink's transfer function and source attenuation. Sink directional cues are captured by binaural head related transfer functions (HRTFs). Source attenuation is modeled as a frequency-independent function of the direction, dispersion, and distance of the source, capturing sensitivity, amplification, and mutual position. Audio windows, aural analogues of video windows, can be thought of as a user interface to binaural sound presentation for a teleconferencing system. Exocentric representation of audio window entities allows manipulation of all teleconferees in a projected egalitarian medium. We are implementing a system that combines dynamically selected HRTFs with dynamically determined source and sink position, azimuth, focus, and size parameters, controlled via iconic manipulation in a graphical window. With such an interface, users may arrange a virtual conference environment, steering the virtual positions of teleconferees.
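A minimal sketch of a frequency-independent source-gain model of the kind described above, combined with nearest-azimuth HRTF selection. The directivity formula, rolloff, and 15° HRTF grid are assumptions for illustration, not the system's actual equations.

```python
# Hedged sketch: per-source gain from distance, dispersion (directivity), and
# sink sensitivity/amplification, plus the index of the nearest measured HRTF.
import math

def source_gain(distance, off_axis_deg, dispersion_deg,
                amplification=1.0, sensitivity=1.0, min_distance=0.1):
    """Gain falls off with distance and with angle off the source's aim;
    a wider dispersion flattens the directional rolloff (assumed model)."""
    rolloff = 1.0 / max(distance, min_distance)
    directivity = math.cos(math.radians(off_axis_deg)) ** (90.0 / max(dispersion_deg, 1.0))
    return amplification * sensitivity * rolloff * max(directivity, 0.0)

def nearest_hrtf_index(azimuth_deg, measured_step_deg=15):
    """Pick the closest measured HRTF azimuth on an assumed 15-degree grid."""
    return round((azimuth_deg % 360) / measured_step_deg) % int(360 / measured_step_deg)

print(source_gain(distance=2.0, off_axis_deg=30, dispersion_deg=60))  # ≈ 0.40
print(nearest_hrtf_index(97))                                          # -> 6
```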