
Keyword Search Result

[Keyword] human(269hit)

201-220hit(269hit)

  • Formal Detection of Three Automation Surprises in Human-Machine Interaction

    Yoshitaka UKAWA  Toshimitsu USHIO  Masakazu ADACHI  Shigemasa TAKAI  

     
    PAPER-Concurrent Systems

      Vol:
    E87-A No:11
      Page(s):
    2878-2884

    In this paper, we propose a formal method for detecting three automation surprises in human-machine interaction: mode confusion, the refusal state, and the blocking state. Mode confusion, the best-known automation surprise, arises when a machine is in a mode different from the one the user anticipates. The refusal state is a situation in which the machine does not respond to a command the user executes. The blocking state is a situation in which an internal event occurs and changes the interface without the user's knowledge. To detect these phenomena, we propose a composite model in which a machine model and a user model evolve concurrently. We show that the detection of these phenomena in human-machine interaction can be reduced to a reachability problem in the composite model.
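
    As a rough illustration of the reduction described above (not the paper's formal construction), the following Python sketch composes a toy machine model and user model, both given as labeled transition systems, and uses breadth-first reachability over the composite model to flag refusal- and blocking-like situations. All states, events, and transitions are invented for the example.

        from collections import deque

        # Hypothetical toy models: (state, event) -> next state.
        machine = {("M0", "engage"): "M1", ("M1", "capture"): "M2", ("M2", "disengage"): "M0"}
        user    = {("U0", "engage"): "U1", ("U1", "disengage"): "U0"}  # user never anticipates "capture"

        def compose_and_check(machine, user, m0="M0", u0="U0"):
            """Breadth-first search over the composite model; report composite states where
            the two models disagree about an event (refusal- and blocking-like situations)."""
            events = {e for _, e in machine} | {e for _, e in user}
            seen, frontier, surprises = {(m0, u0)}, deque([(m0, u0)]), []
            while frontier:
                m, u = frontier.popleft()
                for e in events:
                    mn, un = machine.get((m, e)), user.get((u, e))
                    if mn is None and un is not None:
                        surprises.append(((m, u), e, "refusal: machine ignores the user's command"))
                        continue
                    if mn is not None and un is None:
                        surprises.append(((m, u), e, "blocking: machine changes without the user's knowledge"))
                        un = u                      # the user model stays put
                    if mn is None:
                        continue
                    nxt = (mn, un)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return surprises

        print(compose_and_check(machine, user))

    Mode confusion would be flagged analogously, by comparing the mode labels of the two components of each reachable composite state.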

  • A Simple Method for Facial Pose Detection

    Min Gyo CHUNG  Jisook PARK  Jiyoun DONG  

     
    LETTER-Image and Signal Processing

      Vol:
    E87-A No:10
      Page(s):
    2585-2590

    Much of the work on faces in computer vision has focused on face recognition or facial expression analysis, and has not directly addressed face direction detection. In this paper, we propose a vision-based approach to detecting face direction from a single monocular view of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, the proposed method introduces simple formulas that detect horizontal and vertical face rotation from the facial triangle. It makes no assumption about the structure of the face and produces an accurate estimate of face direction.
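
    The paper's actual formulas are not reproduced in this listing; the following Python fragment is only an illustrative stand-in showing how yaw and pitch estimates could be derived from the 2-D image coordinates of the facial-triangle vertices (the two eyebrows and the lower lip). The normalization constants are invented.

        import math

        def estimate_pose(left_brow, right_brow, lower_lip):
            """Crude yaw/pitch estimate from the facial triangle; illustrative only."""
            (lx, ly), (rx, ry), (mx, my) = left_brow, right_brow, lower_lip
            brow_mid = ((lx + rx) / 2.0, (ly + ry) / 2.0)
            brow_width = math.hypot(rx - lx, ry - ly)
            height = math.hypot(mx - brow_mid[0], my - brow_mid[1])
            # Horizontal rotation: lateral shift of the lip under the eyebrow midpoint.
            yaw = math.degrees(math.asin(max(-1.0, min(1.0, 2.0 * (mx - brow_mid[0]) / brow_width))))
            # Vertical rotation: foreshortening of the triangle height relative to an
            # assumed frontal height-to-width ratio (1.2 is a made-up calibration value).
            pitch = math.degrees(math.acos(max(-1.0, min(1.0, (height / brow_width) / 1.2))))
            return yaw, pitch

        print(estimate_pose((120, 100), (200, 100), (160, 190)))   # roughly frontal face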

  • Personal Entropy from Graphical Passwords: Methods for Quantification and Practical Key Generation

    Masato AKAO  Shinji YAMANAKA  Goichiro HANAOKA  Hideki IMAI  

     
    PAPER-Cryptography and Information Security

      Vol:
    E87-A No:10
      Page(s):
    2543-2554

    In many cryptosystems that involve human beings, the users' limited memories and their indifference to keeping the systems secure can cause severe vulnerabilities in the whole system. We therefore need more studies of personal entropy, from an information-theoretic point of view, to capture the characteristics of human beings as special information sources for cryptosystems. In this paper, we discuss and analyze the use of personal entropy for generating cryptographic keys. In such a case, it is crucially important to evaluate precisely the amount of personal entropy, which indicates the actual key length. We propose an advanced key generation scheme based on the conventional graphical passwords proposed in [12], improving them to make the most of the secret information extracted in one drawing: we incorporate on-line pen pressure and pen inclination information in order to utilize more secret information. We call the scheme dynamic graphical passwords and propose a practical construction. We also show a precise way of quantifying their entropy, and as an experimental result we can generate a key over 110 bits long from the data of a single drawing. When quantifying the entropy, we must precisely evaluate both the entropy of the graphical passwords and that of the on-line pen-movement information, taking into account the users' biased choices of graphical passwords. Since users tend to choose passwords that are as easy to memorize as possible, we quantify the burden of memorizing each graphical password by the length of its description in a special language based on [12], improving the approach of [12] by reflecting more directly how easily each graphical password can be memorized.

  • Interpolation and Extrapolation of Repeated Motions Obtained with Magnetic Motion Capture

    Kiyoshi HOSHINO  

     
    PAPER

      Vol:
    E87-A No:9
      Page(s):
    2401-2407

    In this study, a CG animation tool was designed that allows interpolation and extrapolation between two types of repeated motions, including finger actions, for quantitative analysis of the relationship between features of human motions and subjective impressions. Three-dimensional human motions are measured with a magnetic motion capture system and a pair of data gloves, and relatively accurate time-series joint data are then generated using statistical characteristics. From the data thus obtained, the time-series angular data of each joint for two dancing motions are transformed into the frequency domain by the Fourier transform, and the spectral shapes of the two dancing actions are interpolated. Interpolation and extrapolation between the two motions can be synthesized in a simple manner by changing a weight parameter while keeping the actions in good harmony. Using this CG animation tool as a motion synthesizer, repeated human motions, such as dancing actions that give particular impressions to observers, can be quantitatively measured and analyzed through the synthesis of actions.
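
    A minimal numpy sketch of the spectral blending the abstract describes: two joint-angle time series are transformed to the frequency domain, their spectra are mixed with a weight parameter w (values outside [0, 1] give extrapolation), and the result is transformed back. The trajectories below are synthetic, and this is a generic spectral mix rather than the paper's exact synthesis procedure.

        import numpy as np

        def blend_motions(theta_a, theta_b, w):
            """Blend two repeated joint-angle trajectories of equal length in the
            frequency domain; w=0 reproduces motion A, w=1 motion B, w>1 extrapolates."""
            spec_a, spec_b = np.fft.rfft(theta_a), np.fft.rfft(theta_b)
            blended = (1.0 - w) * spec_a + w * spec_b      # linear mix of the complex spectra
            return np.fft.irfft(blended, n=len(theta_a))

        t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        dance_a = 30.0 * np.sin(t)                         # joint angle of motion A, degrees
        dance_b = 20.0 * np.sin(2.0 * t) + 10.0            # a different repeated motion
        half_way    = blend_motions(dance_a, dance_b, 0.5) # interpolation
        exaggerated = blend_motions(dance_a, dance_b, 1.5) # extrapolation beyond motion B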

  • "Man-Computer Symbiosis" Revisited: Achieving Natural Communication and Collaboration with Computers

    Neal LESH  Joe MARKS  Charles RICH  Candace L. SIDNER  

     
    INVITED PAPER

      Vol:
    E87-D No:6
      Page(s):
    1290-1298

    In 1960, the famous computer pioneer J.C.R. Licklider described a vision for human-computer interaction that he called "man-computer symbiosis." Licklider predicted the development of computer software that would allow people "to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own." More than 40 years later, one rarely encounters any computer application that comes close to capturing Licklider's notion of human-like communication and collaboration. We echo Licklider by arguing that true symbiotic interaction requires at least the following three elements: a complementary and effective division of labor between human and machine; an explicit representation in the computer of the user's abilities, intentions, and beliefs; and the utilization of nonverbal communication modalities. We illustrate this argument with various research prototypes currently under development at Mitsubishi Electric Research Laboratories (USA).

  • Structures of Human Relations and User-Dynamics Revealed by Traffic Data

    Masaki AIDA  Keisuke ISHIBASHI  Hiroyoshi MIWA  Chisa TAKANO  Shin-ichi KURIBAYASHI  

     
    PAPER

      Vol:
    E87-D No:6
      Page(s):
    1454-1460

    The number of customers of a service for Internet access from cellular phones in Japan has been increasing explosively for some time. We analyze the relation between the number of customers and the volume of traffic, with a view to finding clues to the structure of human relations among the very large set of potential customers of the service. The traffic data reveal that this structure is a scale-free network, and we calculate the exponent that governs the distribution of node degree in this network. The data also indicate that people who have many friends tend to subscribe to the service at an earlier stage. These results are useful for investigating various fields, including marketing strategies, the propagation of rumors, and the spread of computer viruses.
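
    The abstract does not give the estimation procedure; as a rough sketch, the exponent of a scale-free degree distribution can be estimated from observed node degrees, for example with a simple log-log least-squares fit as below (a crude estimator chosen only for illustration).

        import numpy as np

        def degree_exponent(degrees, k_min=1):
            """Estimate gamma in P(k) ~ k^(-gamma) by fitting log frequency against log degree."""
            degrees = np.asarray(degrees)
            degrees = degrees[degrees >= k_min]
            ks, counts = np.unique(degrees, return_counts=True)
            slope, _ = np.polyfit(np.log(ks), np.log(counts / counts.sum()), 1)
            return -slope

        rng = np.random.default_rng(0)
        sample = rng.zipf(a=2.5, size=10000)     # synthetic heavy-tailed degree data
        print(degree_exponent(sample))           # should land in the neighborhood of 2.5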

  • Robotic Hand System for Non-verbal Communication

    Kiyoshi HOSHINO  Ichiro KAWABUCHI  

     
    PAPER

      Vol:
    E87-D No:6
      Page(s):
    1347-1353

    The purpose of this study is to design a humanoid robotic hand system capable of conveying feelings and sensitivities through finger movement, for non-verbal communication between humans and robots in the near future. In this paper, the study proceeds in four steps. First, a small and lightweight robotic hand was developed for the humanoid, following the concept of extracting the minimum required motor functions and implementing them in the robot. Second, the basic characteristics of its movement were examined experimentally, a simple feedforward control mechanism based on velocity control was designed, and a system capable of tracking time-series joint commands with arbitrary input patterns was realized. Third, to evaluate the system, its tracking performance for sinusoidal inputs at different frequencies was studied, and its spatial and temporal accuracy was investigated. Fourth, sign-language motions were generated as examples of information transmission by finger movement. The results indicate that this robotic hand can transmit information promptly and with comparatively high accuracy through its movement.
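
    A toy simulation of the kind of velocity-based feedforward tracking the abstract mentions: the command is the time derivative of the desired joint trajectory, and the joint is modeled as a first-order lag. All parameter values here are invented, not taken from the robotic hand.

        import numpy as np

        dt, tau = 0.001, 0.03                              # time step [s], assumed actuator lag [s]
        t = np.arange(0.0, 2.0, dt)
        target = 20.0 * np.sin(2.0 * np.pi * 2.0 * t)      # desired joint angle [deg], 2 Hz
        v_ff = np.gradient(target, dt)                     # feedforward velocity command

        angle, vel, history = 0.0, 0.0, []
        for v_cmd in v_ff:
            vel += (v_cmd - vel) * dt / tau                # first-order lag between command and joint velocity
            angle += vel * dt
            history.append(angle)

        print("peak tracking error [deg]:", np.abs(np.array(history) - target).max())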

  • New Cycling Environments Using Multimodal Knowledge and Ad-hoc Network

    Sachiyo YOSHITAKI  Yutaka SAKANE  Yoichi TAKEBAYASHI  

     
    PAPER

      Vol:
    E87-D No:6
      Page(s):
    1377-1385

    We have been developing new cycling environments based on knowledge sharing and speech communication. We offer multimodal knowledge contents for sharing knowledge on safe and exciting cycling, and have accumulated 140 contents focused on issues such as riding techniques, troubleshooting, and preparation for cycling. We also offer a new form of speech communication for safe cycling using ad-hoc wireless LAN technology. Group cycling requires frequent communication to lead the group safely, and speech communication enables spontaneous exchanges between group members without their looking around or speaking loudly. Experimental results from actual cycling have shown the effectiveness of sharing multimodal knowledge contents and of speech communication. Our newly developed environment has the advantage that multimodal knowledge grows through the accumulation of personal experiences of actual cycling.

  • Wearable Moment Display Device for Nonverbal Communications

    Hideyuki ANDO  Maki SUGIMOTO  Taro MAEDA  

     
    PAPER

      Vol:
    E87-D No:6
      Page(s):
    1354-1360

    There has recently been considerable interest in research on wearable, non-grounded force displays. However, none have been developed for communicating nonverbal information such as the timing and direction of a tennis or golf swing. We propose a small, lightweight wearable force display that presents motion timing and direction. The display outputs a torque using a rotational moment and mechanical brakes. We explain the principle of the device and describe measurements of the generated torque and experiments on torque sensitivity.

  • Real-Time Human Motion Analysis by Image Skeletonization

    Hironobu FUJIYOSHI  Alan J. LIPTON  Takeo KANADE  

     
    PAPER-Face

      Vol:
    E87-D No:1
      Page(s):
    113-120

    In this paper, a process is described for analysing the motion of a human target in a video stream. Moving targets are detected and their boundaries extracted. From these, a "star" skeleton is produced. Two motion cues are determined from this skeletonization: body posture, and cyclic motion of skeleton segments. These cues are used to determine human activities such as walking or running, and even potentially, the target's gait. Unlike other methods, this does not require an a priori human model, or a large number of "pixels on target". Furthermore, it is computationally inexpensive, and thus ideal for real-world video applications such as outdoor video surveillance.
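
    A compact sketch of the "star" skeleton construction under simplifying assumptions: the distance from the boundary's centroid to each boundary point is treated as a circular 1-D signal, smoothed (here with a plain moving average rather than the paper's low-pass filtering), and its local maxima become the skeleton's extremal points.

        import numpy as np

        def star_skeleton(boundary, smooth=15):
            """boundary: (N, 2) array of target boundary points, in order.
            Returns the centroid and the boundary points forming the star skeleton."""
            boundary = np.asarray(boundary, dtype=float)
            centroid = boundary.mean(axis=0)
            dist = np.linalg.norm(boundary - centroid, axis=1)
            padded = np.concatenate([dist[-smooth:], dist, dist[:smooth]])   # circular padding
            smoothed = np.convolve(padded, np.ones(smooth) / smooth, mode="same")[smooth:-smooth]
            left, right = np.roll(smoothed, 1), np.roll(smoothed, -1)
            extremal = boundary[(smoothed > left) & (smoothed >= right)]     # local maxima
            return centroid, extremal

    Posture and cyclic-motion cues would then be read off the angles of the segments joining the centroid to these extremal points, e.g. the inclination of the uppermost segment as a torso axis and the periodic swing of the lowest segments as leg motion.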

  • Facial Parts Recognition by Hierarchical Tracking from Motion Image and Its Application

    Takuma FUNAHASHI  Tsuyoshi YAMAGUCHI  Masafumi TOMINAGA  Hiroyasu KOSHIMIZU  

     
    PAPER-Face

      Vol:
    E87-D No:1
      Page(s):
    129-135

    With the proposed camera system, which is enhanced with a special PTZ camera, the face of a person performing freely in front of the camera can be captured at a resolution sufficient for facial-parts recognition. The head region, the regions of facial parts such as the eyes and mouth, and the borders of the facial parts are extracted hierarchically, guided by the irises and nostrils extracted beforehand from the PTZ camera images. To show the effectiveness of this system, we demonstrate the generation of facial-part borders for facial caricaturing and the creation of eye-contacting facial images that allow bilateral eye contact in a TV-conference environment.

  • Novel Watermark Embedding Technique Based on Human Visual System

    Yong Ju JUNG  Yong Man RO  

     
    LETTER-Image

      Vol:
    E86-A No:11
      Page(s):
    2903-2907

    A good watermark is known to be perceptually invisible, undetectable without a key, and robust to spatial and temporal data modification. In this paper, we exploit the characteristics of the human visual system (HVS) for watermarking. In the HVS, the response of the visual cortex decomposes the image spectrum into perceptual channels that are octave bands in spatial frequency. Based on this octave-band division, the same number of watermark bits is inserted into each channel. Experimental results show that the proposed HVS-based method is more robust against attacks than conventional DCT, wavelet, and DFT watermarking methods.
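
    A rough sketch of the embedding idea: split the centered 2-D spectrum into octave bands by radial spatial frequency and place the same bit string in every band. The coefficient selection and modulation rule below is a placeholder, not the paper's embedding method, and the random seed stands in for the key; the image is assumed large enough to hold the bits in each band.

        import numpy as np

        def octave_band_masks(shape, n_bands=4):
            """Boolean masks over the centered 2-D DFT plane, one per octave band."""
            h, w = shape
            fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
            fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
            r = np.hypot(fx, fy)
            edges = 0.5 / (2.0 ** np.arange(n_bands, -1, -1))   # e.g. 1/32 .. 1/2 of the sampling rate
            return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]

        def embed(image, bits, strength=0.02, seed=42):
            """Insert the same bits into each octave band by scaling chosen coefficients."""
            spec = np.fft.fftshift(np.fft.fft2(image.astype(float)))
            rng = np.random.default_rng(seed)                   # the seed plays the role of the key
            for mask in octave_band_masks(image.shape):
                idx = rng.choice(np.flatnonzero(mask), size=len(bits), replace=False)
                for i, bit in zip(idx, bits):
                    y, x = np.unravel_index(i, image.shape)
                    spec[y, x] *= (1.0 + strength) if bit else (1.0 - strength)
            return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))   # real part suffices for a sketch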

  • Designing and Evaluating Animated Agents as Social Actors

    Helmut PRENDINGER  Mitsuru ISHIZUKA  

     
    PAPER

      Vol:
    E86-D No:8
      Page(s):
    1378-1385

    Recent years have witnessed a growing interest in employing animated agents for tasks that are typically performed by humans. They serve as communicative partners in a variety of applications, such as tutoring systems, sales, or entertainment. This paper first discusses design principles for animated agents to enhance their effectiveness as tutors, sales persons, or actors, among other roles. It is argued that agents should support their perception as social actors by displaying human-like social cues such as affect and gestures. An architecture for emotion-based agents will be described and a simplified version of the model will be illustrated by two interaction scenarios that feature cartoon-style characters and can be run in a web browser. The second focus of this paper is an empirical evaluation of the effect of an affective agent on users' emotional state which is derived from physiological signals of the user. Our findings suggest that an agent with affective behavior may significantly decrease user frustration.

  • An Adaptive Visual Attentive Tracker with HMM-Based TD Learning Capability for Human Intended Behavior

    Minh Anh Thi HO  Yoji YAMADA  Yoji UMETANI  

     
    PAPER-Artificial Intelligence, Cognitive Science

      Vol:
    E86-D No:6
      Page(s):
    1051-1058

    In this study, we build a system called the Adaptive Visual Attentive Tracker (AVAT) to develop a non-verbal communication channel between the system and an operator who performs intended movements. In the system, we construct an HMM (Hidden Markov Model)-based TD (temporal difference) learning algorithm that tracks and zooms in on the operator's behavioral sequence, which represents his or her intention. AVAT extracts intended human movements from ordinary walking behavior using two algorithms: the first models the movements of human body parts with HMMs, and the second learns a model of the tracker's actions with a model-based TD learning algorithm. In this paper, we describe the integration of these two methods, in which the linkage is established by assigning the HMM state-transition probability as the reward in TD learning. Experimental results on extracting an operator's hand-sign action sequence during natural walking are shown, demonstrating the function of AVAT as developed within the framework of perceptual organization. Identification of the sign-gesture context through wavelet analysis autonomously provides a reward value for optimizing AVAT's action patterns.
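
    The coupling described above, with the HMM state-transition probability serving as the reward in TD learning, can be sketched with a minimal tabular TD(0) update; the transition matrix and all learning parameters below are invented, and the tracker's action selection is omitted.

        import numpy as np

        A = np.array([[0.7, 0.2, 0.1],            # hypothetical HMM transition probabilities
                      [0.1, 0.8, 0.1],
                      [0.2, 0.2, 0.6]])
        n_states = A.shape[0]
        V = np.zeros(n_states)                     # state values learned by TD(0)
        alpha, gamma = 0.1, 0.9
        rng = np.random.default_rng(1)

        state = 0
        for _ in range(5000):
            next_state = rng.choice(n_states, p=A[state])   # observed behavioral transition
            reward = A[state, next_state]                    # HMM transition probability as the reward
            V[state] += alpha * (reward + gamma * V[next_state] - V[state])
            state = next_state

        print(V)   # higher values for states whose observed transitions are more predictable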

  • Characteristics of Human Skin Impedance Including at Biological Active Points

    Jeong-Woo LEE  Dong-Man KIM  Il-Yong PARK  Hee-Joon PARK  Jin-Ho CHO  

     
    LETTER

      Vol:
    E86-A No:6
      Page(s):
    1476-1479

    The electrical characteristics of biologically active points (BAPs) are investigated and compared with those of the surrounding human skin. We confirm that BAPs have lower resistance and higher capacitance than the surrounding skin. We also find that BAPs have a higher characteristic frequency than the surrounding skin and that the impedance spectra of BAPs sometimes show two semicircles on the complex impedance plane. We therefore propose a skin impedance model suited to BAPs, and this model describes our experimental results adequately.
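
    Two semicircles on the complex impedance plane are consistent with two parallel RC dispersions in series; the sketch below is a generic model of that form with invented parameter values, not the specific model proposed in the paper.

        import numpy as np

        def skin_impedance(freq_hz, r_inf=100.0, r1=2e3, c1=50e-9, r2=20e3, c2=1e-6):
            """R_inf in series with two parallel RC sections; each section traces
            one semicircle on the complex impedance plane. Values are illustrative."""
            w = 2.0 * np.pi * np.asarray(freq_hz)
            z1 = r1 / (1.0 + 1j * w * r1 * c1)
            z2 = r2 / (1.0 + 1j * w * r2 * c2)
            return r_inf + z1 + z2

        freqs = np.logspace(0, 6, 200)             # 1 Hz to 1 MHz
        z = skin_impedance(freqs)                  # plot -Im(z) against Re(z) to see the two arcs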

  • Analysis of Microwave Power Absorption in a Multilayered Cylindrical Human Model near a Corner Wall

    Shuzo KUWANO  

     
    PAPER-Electromagnetic Compatibility(EMC)

      Vol:
    E86-B No:2
      Page(s):
    838-843

    A large part of our daily lives is spent surrounded by buildings and other structures. In this paper, we use an infinite-length, multilayered cylindrical model to rigorously analyze the microwave specific absorption rate (SAR) of a human standing near a 90° corner wall. At frequencies above 1 GHz, the interactions between the microwaves, the human body (including layer resonance), and the corner cause complex changes in the average SAR. We show numerically that the SAR with the corner present is up to four times larger than when there is no corner, and that the average SAR for TE waves at frequencies below 1 GHz is up to 10 times greater than when there is no corner.

  • A Knowledge-Based Information Modeling for Autonomous Humanoid Service Robot

    Haruki UENO  

     
    PAPER-System

      Vol:
    E85-D No:4
      Page(s):
    657-665

    This paper presents the concepts and methodology of knowledge-based information modeling, grounded in cognitive science, for realizing HARIS, an autonomous humanoid service robotic arm and hand system. The HARIS robotic system consists of model-based 3D vision, an intelligent scheduler, a computerized arm/hand controller, the humanoid HARIS arm/hand unit, and a human interface, and aims to assist the aged and disabled with desk-top object manipulation. The world model, i.e., a shared knowledge base, is introduced as a communication channel among the software modules. Both the task scheduling and the 3D vision are based on cognitive science; that is, a human's way of seeing and scheduling is considered in designing the knowledge-based software. The key idea is to use "words" in describing a scene, scheduling tasks, controlling the arm and hand, and interacting with a human. The world model plays a key role in fusing a variety of distributed functions. The generalized frame-based knowledge engineering environment ZERO++ has been used effectively as the software platform in implementing the system. The experimental system works successfully within a limited setting. Through the introduction of cognitive-science-based information modeling, we have gained useful hints toward realizing human-robot symbiosis, the long-term goal of the project.

  • Software Creation: Cliché as Intermediate Knowledge in Software Design

    Hassan ABOLHASSANI  Hui CHEN  Zenya KOONO  

     
    PAPER-Software Engineering

      Vol:
    E85-D No:1
      Page(s):
    221-232

    This paper reports on clichés and related mechanisms that appear in the human design of software. During studies of human design knowledge, the authors found frequent instances of the same pattern of detailing, which they named a cliché. In our study, a cliché is an intermediate level of design knowledge used during a hierarchical detailing step, residing between simple reuse and creation by micro design rules, both of which have already been reported. These three kinds of design knowledge are of various types and have different complexities. We discuss them with a focus on clichés, giving procedures for forming a simple cliché skeleton and for generating a cliché. The studies show a working model of Zipf's principle and are a step toward revealing more of the detail of human design.

  • Fiber Tract Following in the Human Brain Using DT-MRI Data

    Peter J. BASSER  Sinisa PAJEVIC  Carlo PIERPAOLI  Akram ALDROUBI  

     
    INVITED PAPER

      Vol:
    E85-D No:1
      Page(s):
    15-21

    In Vivo Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) can now be used to elucidate and investigate major nerve pathways in the brain. Nerve pathways are constructed by a) calculating a continuous diffusion tensor field from the discrete, noisy, measured DT-MRI data and then b) solving an equation describing the evolution of a fiber tract, in which the local direction vector of the trajectory is identified with the direction of maximum apparent diffusivity. This approach has been validated previously using synthesized, noisy DT-MRI data. Presently, it is possible to reconstruct large white matter structures in the brain, such as the corpus callosum and the pyramidal tracts. Several problems, however, still affect the method's reliability. Its accuracy degrades where the fiber-tract directional distribution is non-uniform, and background noise in diffusion weighted MRIs can cause computed trajectories to jump to different tracts. Nonetheless, this method can provide quantitative information with which to visualize and study connectivity and continuity of neural pathways in the central and peripheral nervous systems in vivo, and holds promise for elucidating architectural features in other fibrous tissues and ordered media.
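
    The tract-following step described above integrates a trajectory whose local tangent is the direction of maximum apparent diffusivity, i.e. the principal eigenvector of the diffusion tensor. The sketch below does a plain Euler integration with nearest-voxel tensor lookup on a synthetic field; the paper instead constructs a continuous tensor field from the measured data.

        import numpy as np

        def follow_tract(tensor_field, seed, step=0.5, n_steps=200):
            """tensor_field: (X, Y, Z, 3, 3) array of diffusion tensors.
            Trace a trajectory by stepping along the principal eigenvector at each position."""
            pos, prev_dir = np.asarray(seed, dtype=float), np.zeros(3)
            path = [pos.copy()]
            upper = np.array(tensor_field.shape[:3]) - 1
            for _ in range(n_steps):
                idx = tuple(np.clip(np.round(pos).astype(int), 0, upper))
                vals, vecs = np.linalg.eigh(tensor_field[idx])
                direction = vecs[:, np.argmax(vals)]           # direction of maximum diffusivity
                if direction @ prev_dir < 0:                   # keep a consistent orientation
                    direction = -direction
                pos = pos + step * direction
                prev_dir = direction
                path.append(pos.copy())
            return np.array(path)

        field = np.zeros((20, 20, 20, 3, 3))
        field[...] = np.diag([1.0, 0.2, 0.2])      # synthetic field: fibers aligned with the x axis
        print(follow_tract(field, seed=(2.0, 10.0, 10.0))[:3])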

  • Real Time Feature-Based Facial Tracking Using Lie Algebras

    Akira INOUE  Tom DRUMMOND  Roberto CIPOLLA  

     
    LETTER

      Vol:
    E84-D No:12
      Page(s):
    1733-1738

    We have developed a novel human facial tracking system that operates in real time at video frame rate without any special hardware. The approach is based on Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a roughly estimated facial model (the relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. The tracking then proceeds in the following sequence: (1) capture a new video frame and render the feature points onto the image plane; (2) search for the new positions of the feature points on the image plane; (3) obtain the Euclidean matrix from the motion vectors and the three-dimensional information of the points; and (4) rotate and translate the feature points using the Euclidean matrix, and render the new points onto the image plane. The key algorithm of the tracker estimates the Euclidean matrix with a least-squares technique based on the Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
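
    The least-squares step can be sketched as one Gauss-Newton update on se(3): each of the six generators gives every 3-D feature point a velocity, the pinhole-projection Jacobian maps those velocities to the image plane, the generator coefficients are solved for by least squares from the measured feature displacements, and the matrix exponential returns the Euclidean update. The focal length and all inputs below are assumptions, and this is a schematic version rather than the authors' implementation.

        import numpy as np
        from scipy.linalg import expm

        # The six generators of se(3): translations along x, y, z and rotations about x, y, z.
        G = np.zeros((6, 4, 4))
        G[0, 0, 3] = G[1, 1, 3] = G[2, 2, 3] = 1.0
        G[3, 2, 1], G[3, 1, 2] = 1.0, -1.0
        G[4, 0, 2], G[4, 2, 0] = 1.0, -1.0
        G[5, 1, 0], G[5, 0, 1] = 1.0, -1.0

        def pose_update(points_3d, image_error, f=500.0):
            """One least-squares estimate of the rigid motion explaining the measured
            image-plane displacements (image_error, one 2-vector per tracked 3-D point)."""
            A_rows, b = [], []
            for p, e in zip(points_3d, image_error):
                x, y, z = p
                ph = np.append(p, 1.0)
                J = np.array([[f / z, 0.0, -f * x / z**2],      # Jacobian of the projection at p
                              [0.0, f / z, -f * y / z**2]])
                A_rows.append(np.stack([J @ (G[k] @ ph)[:3] for k in range(6)], axis=1))
                b.append(e)
            coeffs, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b), rcond=None)
            return expm(sum(c * Gk for c, Gk in zip(coeffs, G)))    # 4x4 Euclidean update

    The returned matrix would then be applied to the homogeneous coordinates of the feature points before they are re-rendered onto the image plane, as in step (4) above.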

201-220hit(269hit)