
Keyword Search Result

[Keyword] models (163 hits)

Results 121-140 of 163

  • A Proximity-Based Path Compression Protocol for Mobile Ad Hoc Networks

    Masato SAITO  Hiroto AIDA  Yoshito TOBE  Hideyuki TOKUDA  

     
    PAPER-Ad Hoc Network

    Vol: E87-B No:9  Page(s): 2484-2492

    This paper presents a path compression protocol for on-demand ad hoc network routing protocols, called dynamic path shortening (DPS). In DPS, active route paths adapt dynamically to node mobility based on "local" link quality estimation at each node, without exchanging periodic control packets such as Hello messages. Each node monitors its local link quality only when receiving packets and estimates, in a distributed manner, whether it has entered the "proximity" of a neighbor node so that the active path can be shortened. Simulation results for DPS in several scenarios with various node mobility and traffic flows reveal that, in heavy-traffic cases, adding DPS to DSR, a prominent conventional on-demand ad hoc routing protocol, reduces end-to-end packet latency by up to 50 percent and the number of routing packets by up to 70 percent relative to pure DSR. We also present simulation results obtained using two novel mobility models that generate more realistic node movement than the standard random waypoint model: the Random Orientation Mobility and Random Escape Mobility models. Finally, simple performance experiments with a DPS implementation on FreeBSD demonstrate that DPS shortens active routes on the order of milliseconds (about 5 ms).
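    The core of DPS, cutting intermediate hops out of an active path once a downstream node is judged to be in "proximity", can be sketched geometrically. The toy sketch below uses hypothetical node names and a Euclidean distance radius as a stand-in for the paper's per-node link-quality estimate:

    ```python
    import math

    def shorten_path(path, pos, proximity=1.0):
        """Greedy one-pass path shortening: from each node, jump to the
        farthest downstream node within the proximity radius, bypassing
        the intermediate hops."""
        out = [path[0]]
        i = 0
        while i < len(path) - 1:
            j = len(path) - 1
            while j > i + 1:                 # scan downstream nodes, far to near
                if math.dist(pos[path[i]], pos[path[j]]) <= proximity:
                    break                    # within proximity: shortcut to j
                j -= 1
            out.append(path[j])
            i = j
        return out

    pos = {"S": (0, 0), "A": (0.5, 0), "B": (0.9, 0), "D": (1.8, 0)}
    print(shorten_path(["S", "A", "B", "D"], pos, proximity=1.0))  # A is bypassed
    ```

    Here the intermediate node A is dropped because B already lies within S's proximity radius; real DPS makes this decision locally at each node from observed link quality rather than from known positions.
    
    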

  • Plausible Models for Propagation of the SARS Virus

    Michael SMALL  Pengliang SHI  Chi Kong TSE  

     
    PAPER

    Vol: E87-A No:9  Page(s): 2379-2386

    Using daily infection data for Hong Kong, we explore the validity of a variety of models of disease propagation when applied to the SARS epidemic. Surrogate data methods show that simple random models are insufficient and that the standard susceptible-infected-removed (SIR) epidemic model does not fully account for the underlying variability in the observed data. As an alternative, we consider a more complex small-world network model and show that such a structure can reliably produce simulations quantitatively similar to the true data. The small-world network model not only captures the apparently random fluctuations in the reported data, but can also reproduce mini-outbreaks such as those caused by so-called "super-spreaders" and the outbreak in the Amoy Gardens housing estate in Hong Kong.
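    The modelling progression described above, from the standard SIR model to an epidemic on a small-world network, can be illustrated with a toy simulation. The sketch below (illustrative parameters, not the paper's identified model) builds a Watts-Strogatz-style small-world graph and runs a discrete-time stochastic SIR process on it:

    ```python
    import random

    def watts_strogatz(n, k, p, rng):
        """Ring lattice with k forward neighbours per node; each edge is
        rewired to a random node with probability p (small-world graph)."""
        adj = {i: set() for i in range(n)}
        for i in range(n):
            for d in range(1, k + 1):
                j = (i + d) % n
                if rng.random() < p:            # rewire this edge
                    j = rng.randrange(n)
                    while j == i or j in adj[i]:
                        j = rng.randrange(n)
                adj[i].add(j)
                adj[j].add(i)
        return adj

    def sir(adj, beta, gamma, patient_zero, rng, steps=100):
        """Discrete-time stochastic SIR on a network; returns the number
        of new infections per day."""
        state = {v: "S" for v in adj}
        state[patient_zero] = "I"
        daily = []
        for _ in range(steps):
            new_inf, new_rec = [], []
            for v, s in state.items():
                if s == "I":
                    for u in adj[v]:            # try to infect neighbours
                        if state[u] == "S" and rng.random() < beta:
                            new_inf.append(u)
                    if rng.random() < gamma:    # recover/remove
                        new_rec.append(v)
            for u in new_inf:
                state[u] = "I"
            for v in new_rec:
                state[v] = "R"
            daily.append(len(set(new_inf)))
            if not any(s == "I" for s in state.values()):
                break
        return daily

    rng = random.Random(1)
    g = watts_strogatz(200, 2, 0.1, rng)
    daily = sir(g, beta=0.2, gamma=0.1, patient_zero=0, rng=rng)
    print(daily[:10])
    ```

    The rewired long-range links are what let localized clusters produce sudden mini-outbreaks of the kind the abstract attributes to super-spreaders.
    
    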

  • Global and Local Feature Extraction by Natural Elastic Nets

    Jiann-Ming WU  Zheng-Han LIN  

     
    LETTER-Pattern Recognition

    Vol: E87-D No:9  Page(s): 2267-2271

    This work explores generative models of handwritten digit images using natural elastic nets. The analysis aims to extract global features as well as distributed local features of handwritten digits. These features are expected to form a basis that is significant for discriminant analysis of handwritten digits and related analysis of character images or natural images.

  • The Impact of Source Traffic Distribution on Quality of Service (QoS) in ATM Networks

    Seshasayi PILLALAMARRI  Sumit GHOSH  

     
    PAPER-Network

    Vol: E87-B No:8  Page(s): 2290-2307

    A principal attraction of ATM networks, in both wired and wireless realizations, is that the key quality of service (QoS) parameters of every call, including end-to-end delay, jitter, and loss, are guaranteed by the network when appropriate cell-level traffic controls are imposed at the user network interface (UNI) on a per-call basis, utilizing the peak cell rate (PCR) and sustainable cell rate (SCR) values for the multimedia (voice, video, and data) traffic sources. There are three practical difficulties with these guarantees. First, PCR and SCR values are in general difficult to obtain for traffic sources; the typical user-provided parameter is a combination of the PCR, the SCR, and the maximum burstiness over the entire duration of the traffic. Second, accurately defining the PCR is difficult because the smallest time interval over which the PCR is computed must be specified, and in the limit this interval approaches zero, or the network's resolution of time. Third, the literature does not contain any reference to a scientific principle underlying these guarantees. Under these circumstances, providing QoS guarantees in the real world through traffic controls applied on a per-call basis is rendered uncertain. This paper adopts a radically different, high-level approach to QoS guarantees. It aims at uncovering, through systematic experimentation, a relationship, if any exists, between the key high-level user traffic characteristics and the resulting QoS measures in a realistic operational environment. While each user is solely interested in the QoS of his or her own traffic, the network provider cares about two factors: (1) maximizing link utilization in the network, since links constitute a significant investment, and (2) ensuring the QoS guarantees for every user's traffic, thereby maintaining customer satisfaction. Based on these observations, this paper proposes a two-phase strategy.
    Under the first phase, the average link utilization computed over all the links in a network is maintained within a range specified by the underlying network provider through high-level call admission control, i.e., by limiting the volume of traffic incident on the network at any time. The second phase is based on the hypothesis that the number of traffic sources, their nature (audio, video, or data), and the bandwidth distribution of the source traffic, admitted subject to a specific chosen value of link utilization, exert a unique influence on the cumulative delay distribution at the buffers of representative nodes and, hence, on the QoS guarantees of each call. The underlying reasoning is as follows. The cumulative buffer delay distribution at any given node and time instant reflects the cumulative effect of the traffic distributions of the multiple connections currently active on the input links. Any bounds imposed on the cumulative buffer delay distribution at the nodes of the network will also dominate the QoS bounds of each constituent user's traffic. Thus, for each individual traffic source, the buffer delay distributions at the nodes of the network, obtained for different traffic distributions, may serve as its QoS measure. If the hypothesis is proven true, the number of traffic sources and their bandwidth distribution will, in essence, serve as a practically realizable high-level traffic control for providing realistic QoS guarantees for every call. To verify the hypothesis, an experiment is designed that consists of a representative ATM network, traffic sources characterized through representative and realistic user-provided parameters, and a given set of input traffic volumes appropriate for a provider-approved link utilization measure.
    The key source traffic parameters include the number of sources incident on the network and the constituent links at any given time, the bandwidth requirements of the sources, and their nature. For each call, the constituent cells are generated stochastically, utilizing the typical user-provided parameter as an estimate of the bandwidth requirement. Extensive simulations reveal that, for a given link utilization level held uniform throughout the network, the QoS metrics (end-to-end cell delay, jitter, and loss) are superior in the presence of many calls each with a low bandwidth requirement, but significantly worse when the network carries fewer calls of very high bandwidth. The findings demonstrate the feasibility of guaranteeing QoS for each and every call through high-level traffic controls. As for practicality, call durations are relatively long, ranging from milliseconds to minutes, enabling network management to exercise realistic control over them even in a geographically widely dispersed ATM network. In contrast, current traffic controls that act on ATM cells at the UNI face a formidable challenge from high-bandwidth traffic, where cell lifetimes may be extremely short, in the range of microseconds. The findings also underscore two additional important contributions of this paper. First, the network provider may collect data on the high-level user traffic characteristics, compute the corresponding average link utilization, and measure the cumulative buffer delay distributions at the nodes of an operational network. The provider may then determine, based on all relevant criteria, a range of input and system parameters over which the network may be permitted to operate, whose intersection may yield a realistic network operating point (NOP).
    During subsequent operation, the network provider may guide and maintain the network at a desired NOP by exercising control over the input and system parameters, including link utilization, call admittance based on the requested bandwidth, etc. Second, the finding reveals a vulnerability of ATM networks that a perpetrator may exploit to launch a performance attack.
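    The central finding, that at equal link utilization many low-bandwidth calls see better delay than a few high-bandwidth bursty calls, can be reproduced qualitatively with a toy slotted-buffer simulation. Everything below (the on-off burst model, the parameters) is an illustrative stand-in for the paper's experiment, not a reproduction of it:

    ```python
    import random

    def mean_delay(n_sources, burst, load, slots=50_000, seed=7):
        """Slotted FIFO link serving one cell per slot. Each source emits a
        burst of `burst` cells with a per-slot probability chosen so that
        the total offered load is `load` cells per slot."""
        rng = random.Random(seed)
        p = load / (n_sources * burst)       # per-source burst probability
        q = 0                                # cells waiting in the buffer
        total_wait, arrivals = 0, 0
        for _ in range(slots):
            for _ in range(n_sources):
                if rng.random() < p:
                    for k in range(burst):   # each cell waits behind q + k cells
                        total_wait += q + k
                    q += burst
                    arrivals += burst
            if q:
                q -= 1                       # serve one cell this slot
        return total_wait / arrivals

    # same 80% utilization: many thin calls vs. a few fat, bursty calls
    d_thin = mean_delay(n_sources=40, burst=1, load=0.8)
    d_fat = mean_delay(n_sources=2, burst=20, load=0.8)
    print(d_thin, d_fat)
    ```

    At identical utilization, the bursty high-bandwidth configuration produces a markedly longer mean queueing delay, matching the direction of the paper's result.
    
    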

  • Compact CMOS Modelling for Advanced Analogue and RF Applications

    Dirk B.M. KLAASSEN  Ronald van LANGEVELDE  Andries J. SCHOLTEN  

     
    INVITED PAPER

    Vol: E87-C No:6  Page(s): 854-866

    The rapid down-scaling of minimum feature size in CMOS technologies has boosted RF performance, opening up the RF application area to CMOS. The concurrent reduction of supply voltage pushes the MOSFETs used in circuit design more and more into the moderate-inversion regime of operation. As a consequence, compact MOS models are needed that are accurate in all operating regimes, including moderate inversion. Advanced analogue applications require accurate modelling of distortion, capacitances, and noise. RF application of MOSFETs requires extending this accurate modelling up to high frequencies and, in addition, accurate modelling of impedance levels and power gain. The implications for compact MOS models are discussed, together with the state of the art in compact MOS modelling. Special attention is paid to some well-known circuit examples and the compact-model requirements needed for their correct description. Where relevant, MOS Model 11 is used to illustrate the discussion.
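    A compact model that stays accurate through moderate inversion needs a single expression valid from weak to strong inversion. The sketch below uses a generic EKV-style interpolation of the saturation drain current (a textbook form, not MOS Model 11; all parameter values are illustrative):

    ```python
    import math

    def drain_current(vgs, vt=0.5, n=1.3, beta=2e-3, ut=0.0258):
        """EKV-style single-expression saturation drain current, continuous
        from weak through moderate to strong inversion.
        Weak inversion:   Id ~ exp((Vgs - Vt) / (n * Ut))
        Strong inversion: Id ~ (beta / (2 n)) * (Vgs - Vt)**2"""
        x = (vgs - vt) / (2 * n * ut)
        lnterm = x if x > 30 else math.log1p(math.exp(x))  # stable log(1 + e^x)
        return 2 * n * beta * ut ** 2 * lnterm ** 2

    for vgs in (0.3, 0.5, 0.7, 0.9):
        print(f"Vgs={vgs:.1f} V  Id={drain_current(vgs):.3e} A")
    ```

    The ln(1 + exp(.)) interpolation is what gives a smooth, continuous transition through moderate inversion, the regime the abstract highlights as critical at low supply voltages.
    
    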

  • Medical Endoscopic Image Segmentation Using Snakes

    Sung Won YOON  Hai Kwang LEE  Jeong Hoon KIM  Myoung Ho LEE  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E87-D No:3  Page(s): 785-789

    Image segmentation is an essential technique of image analysis. Despite the issues of contour initialization and boundary concavities, active contour models (snakes) are popular and successful methods for segmentation. In this paper, we present a new active contour model, the Gaussian Gradient Force snake (GGF snake), for segmentation of endoscopic images. The GGF snake is less sensitive to contour initialization and ensures high accuracy, a large capture range, and low CPU time for computing the external force. The GGF snake produced more reasonable results than previous snakes on various image types: simple synthetic images, commercial digital camera images, and endoscopic images.

  • Sparse Realization of Passive Reduced-Order Interconnect Models via PRIMA

    Yuya MATSUMOTO  Yuichi TANJI  Mamoru TANAKA  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E87-A No:1  Page(s): 251-257

    This paper describes a sparse realization of passive reduced-order interconnect models via PRIMA to provide SPICE-compatible models. It is demonstrated that, if the SPICE models are realized directly by stamping the reduced-order equations obtained via PRIMA into the MNA matrix, simulations of networks containing the macromodels become computationally inefficient when the size of the reduced-order equations is relatively large. This is due to the dense coefficient matrices of the reduced-order equations resulting from the congruence transformations in PRIMA. To overcome this disadvantage, we propose a sparse realization of the reduced-order models. Since the expression is equivalent to the reduced-order equations, the passivity of the generated SPICE models is also guaranteed. Computational efficiency in SPICE is demonstrated by performing transient analysis of circuits containing the proposed macromodels.

  • Progressive Geometry Coding of Partitioned 3D Models

    Masahiro OKUDA  Shin-ichi TAKAHASHI  

     
    PAPER-Man-Machine Systems, Multimedia Processing

    Vol: E86-D No:11  Page(s): 2418-2425

    Files of 3D mesh models are often large and hence time-consuming to retrieve from a storage device or download over a network. Most 3D viewing applications need to obtain the entire file of a 3D model in order to display it, even when the user is interested only in a small part, or a low-resolution version, of the model. Therefore, coding that enables multiresolution and ROI (Region Of Interest) transmission of 3D models is desirable. In this paper, we propose a coding algorithm for 3D models based on partitioning schemes. The algorithm partitions the 3D mesh into small sub-meshes according to geometric criteria (such as curvature), and then codes each sub-mesh separately to transmit it progressively to users on demand. The key idea of this paper lies in the mesh partitioning procedure prior to LOD control, which enables a good compression ratio for the mesh data as well as other desirable properties for network transmission, such as ROI coding, view-adaptive transmission, and error-resilient coding.

  • Discrete Availability Models to Rejuvenate a Telecommunication Billing Application

    Tadashi DOHI  Kazuki IWAMOTO  Hiroyuki OKAMURA  Naoto KAIO  

     
    PAPER-Network Systems and Applications

    Vol: E86-B No:10  Page(s): 2931-2939

    Software rejuvenation is a proactive fault management technique that has been extensively studied in the recent literature. In this paper, we focus on the telecommunication billing application considered by Huang et al. (1995) and develop discrete-time stochastic models to estimate the optimal software rejuvenation schedule. More precisely, two software availability models with rejuvenation are formulated via discrete semi-Markov processes, and the optimal software rejuvenation schedules that maximize the steady-state availabilities are derived analytically. Further, we develop statistically non-parametric algorithms to estimate the optimal software rejuvenation schedules, provided that complete sample data of failure times are given. A new statistical device, called the discrete total time on test statistic, is then introduced. Finally, we examine asymptotic properties of the proposed statistical estimation algorithms through a simulation experiment.
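    The optimization described above, choosing the rejuvenation epoch that maximizes steady-state availability, can be sketched with a simple discrete renewal-reward calculation. This is a toy stand-in for the paper's semi-Markov formulation: the failure distribution, repair time t_f, and rejuvenation time t_r below are all illustrative.

    ```python
    def optimal_rejuvenation(pmf, t_f=10.0, t_r=2.0):
        """Return the rejuvenation epoch n maximizing steady-state
        availability A(n) = E[uptime] / (E[uptime] + E[downtime]).
        pmf[k] is the probability that the software fails at time k + 1;
        t_f and t_r are mean repair and rejuvenation overheads (t_f > t_r)."""
        best_n, best_a = None, -1.0
        for n in range(1, len(pmf) + 1):
            F = sum(pmf[:n])                                 # P(failure <= n)
            up = sum((k + 1) * pmf[k] for k in range(n)) + n * (1 - F)
            down = t_f * F + t_r * (1 - F)
            a = up / (up + down)
            if a > best_a:
                best_n, best_a = n, a
        return best_n, best_a

    # toy increasing-failure-rate distribution over times 1..20
    pmf = [k / 210 for k in range(1, 21)]                    # sums to 1
    n_star, a_star = optimal_rejuvenation(pmf)
    print(n_star, round(a_star, 4))
    ```

    With an increasing failure rate, rejuvenating early trades a small scheduled downtime t_r against the larger unscheduled repair time t_f, which is the trade-off the paper's availability models formalize.
    
    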

  • HFET and HBT Modelling for Circuit Analysis

    Iltcho ANGELOV  

     
    INVITED PAPER

    Vol: E86-C No:10  Page(s): 1968-1976

    Results of modeling work on FET and HBT microwave devices are presented. The models are implemented in CAD tools. Experimental characteristics are compared with simulated ones, and there is good agreement between measurements and simulations.
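    A widely used compact HFET drain-current expression from this line of work is the Angelov (Chalmers) model. The sketch below implements a simplified form of it; the exact formulation in the paper may differ, and every parameter value here is illustrative:

    ```python
    import math

    def angelov_ids(vgs, vds, ipk=0.05, vpk=-0.3, p1=1.5, p2=0.1, p3=0.05,
                    alpha=2.0, lam=0.02):
        """Simplified Angelov-style HFET drain current:
        Ids = Ipk * (1 + tanh(psi)) * tanh(alpha * Vds) * (1 + lam * Vds),
        psi a polynomial in (Vgs - Vpk) around the peak-gm gate voltage."""
        dv = vgs - vpk
        psi = p1 * dv + p2 * dv ** 2 + p3 * dv ** 3
        return ipk * (1 + math.tanh(psi)) * math.tanh(alpha * vds) * (1 + lam * vds)

    for vgs in (-0.5, 0.0, 0.5):
        print(f"Vgs={vgs:+.1f} V  Ids={angelov_ids(vgs, 2.0) * 1e3:.2f} mA")
    ```

    The tanh form gives smooth, infinitely differentiable I-V curves, which is one reason models of this family fit well into harmonic-balance CAD tools.
    
    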

  • Fine-Grained Shock Models to Rejuvenate Software Systems

    Hiroki FUJIO  Hiroyuki OKAMURA  Tadashi DOHI  

     
    LETTER

    Vol: E86-D No:10  Page(s): 2165-2171

    Software rejuvenation is a proactive fault management technique for operational software systems that age due to error conditions accruing with time and/or load, and it is important for high-assurance systems design. In this paper, fine-grained shock models are developed to determine the optimal rejuvenation policies that maximize system availability. We introduce three kinds of rejuvenation schemes and calculate the optimal software rejuvenation schedule maximizing system availability for each scheme. The stochastic models with the three rejuvenation policies are extensions of Bobbio et al. (1998, 2001) and represent the failure phenomenon due to exhaustion of software resources caused by memory leaks, fragmentation, etc. Numerical examples compare the three control schemes quantitatively.

  • Trend Analysis and Prediction with Historical Request Data for Multimedia-on-Demand Systems

    Danny M. P. NG  Eric W. M. WONG  King-Tim KO  Kit-Song TANG  

     
    PAPER-Multimedia Systems

    Vol: E86-B No:6  Page(s): 2001-2011

    Resource-demanding services such as Multimedia-on-Demand (MOD) become possible as the Internet and broadband connections grow more popular. However, as the sizes of multimedia files grow rapidly, storing such large files becomes a problem. Since multimedia content generally becomes less popular with time, it is desirable to design a prediction algorithm so that content can be unloaded from the server once it is no longer popular. This relieves the storage problem in an MOD system and hence spares more space for new multimedia files. In this paper, we analyse the MOD viewing trend in order to understand the viewing behaviour of users, and we predict the viewing trend of a particular category of multimedia based on the knowledge obtained from the trend analysis. In the trend analysis, two additive regression models, exponential-exponential-sum (EES) and exponential-power-sum (EPS), are proposed to improve the fit of the trend. The most suitable model is then used for trend prediction based on four proposed approaches, namely Fixed Regression Selection (FRS), Continuous Regression Updating (CRU), Historical Updating (HU), and Continuous Regression with Historical Updating (CRHU). The numerical results show that CRHU, which considers both the historical trend and newly arriving viewing-request data, is in general the best method for forecasting the request trend of a particular category of multimedia clips.
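    The paper's EES and EPS regression forms are not reproduced here, but the underlying idea, fitting a decaying request trend and extrapolating it, can be sketched with the simplest member of the family: a single-exponential least-squares fit on log-transformed request counts, shown on synthetic data:

    ```python
    import math

    def fit_exponential(ts, ys):
        """Least-squares fit of y = a * exp(b * t) via linear regression on
        log y (requires y > 0). Returns the fitted (a, b)."""
        n = len(ts)
        ly = [math.log(y) for y in ys]
        mt = sum(ts) / n
        ml = sum(ly) / n
        b = sum((t - mt) * (l - ml) for t, l in zip(ts, ly)) / \
            sum((t - mt) ** 2 for t in ts)
        a = math.exp(ml - b * mt)
        return a, b

    # synthetic daily request counts decaying from 100 at rate 0.3
    ts = list(range(10))
    ys = [100 * math.exp(-0.3 * t) for t in ts]
    a, b = fit_exponential(ts, ys)
    print(round(a, 2), round(b, 3))   # recovers 100.0 and -0.3
    ```

    The paper's additive EES/EPS models refine this baseline with extra exponential or power terms, and its CRU/CRHU approaches refit as new request data arrive.
    
    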

  • An Adaptive Visual Attentive Tracker with HMM-Based TD Learning Capability for Human Intended Behavior

    Minh Anh Thi HO  Yoji YAMADA  Yoji UMETANI  

     
    PAPER-Artificial Intelligence, Cognitive Science

    Vol: E86-D No:6  Page(s): 1051-1058

    In this study, we build a system called the Adaptive Visual Attentive Tracker (AVAT) to develop a non-verbal communication channel between the system and an operator who presents intended movements. In the system, we construct an HMM (Hidden Markov Model)-based TD (Temporal Difference) learning algorithm to track and zoom in on an operator's behavioral sequence that represents his or her intention. AVAT extracts human intended movements from ordinary walking behavior based on two algorithms: the first models the movements of human body parts using HMMs, and the second learns the model of the tracker's action using a model-based TD learning algorithm. In this paper, we describe the integration of these two methods, whose linkage is established by assigning the state transition probability in the HMM as a reward in TD learning. Experimental results of extracting an operator's hand-sign action sequence during natural walking motion demonstrate the function of AVAT as developed within the framework of perceptual organization. Identification of the sign gesture context through wavelet analysis autonomously provides a reward value for optimizing AVAT's action patterns.

  • Models of Small Microwave Devices in FDTD Simulation

    Qing-Xin CHU  Xiao-Juan HU  Kam-Tai CHAN  

     
    INVITED PAPER

    Vol: E86-C No:2  Page(s): 120-125

    In the FDTD simulation of microwave circuits, a device that is very small compared with the wavelength is often handled as a lumped element, but it may still occupy more than one cell, rather than the volumeless wire structure routinely employed in classical extended FDTD algorithms. In this paper, two modified extended FDTD algorithms incorporating a lumped element occupying more than one cell are developed directly from the integral form of Maxwell's equations, based on whether displacement current is assumed to exist inside the region where the device is present. If displacement current exists, the modified extended FDTD algorithm can be represented as a Norton equivalent current-source circuit; otherwise, as a Thevenin equivalent voltage-source circuit. These algorithms are applied to a microwave line loaded with a lumped resistor and to an active antenna to illustrate their efficiency and the difference between the two algorithms.
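    The lumped-element idea can be illustrated in one dimension. The sketch below runs a normalized 1D Yee/FDTD grid in which a single-cell resistor is folded into the E-field update semi-implicitly, in the spirit of the Norton current-source treatment; it is a toy in normalized units (c = 1), not the paper's multi-cell formulation, and the grid size, resistance, and source are all illustrative:

    ```python
    import math

    # 1D FDTD in normalized units (c = 1, dz = 1, dt = 0.5, Courant-stable)
    nz, steps = 200, 400
    dt, dz = 0.5, 1.0
    ez = [0.0] * nz
    hy = [0.0] * nz
    m = 150                        # cell occupied by the lumped resistor
    R = 2.0                        # normalized resistance
    beta = dt / (2 * R)            # loss factor from the resistor current

    for n in range(steps):
        for k in range(nz - 1):                 # update H from the curl of E
            hy[k] += dt / dz * (ez[k + 1] - ez[k])
        for k in range(1, nz):                  # update E from the curl of H
            curl = dt / dz * (hy[k] - hy[k - 1])
            if k == m:                          # semi-implicit resistor update
                ez[k] = ((1 - beta) * ez[k] + curl) / (1 + beta)
            else:
                ez[k] += curl
        ez[50] += math.exp(-((n - 30) ** 2) / 100.0)   # soft Gaussian source

    peak = max(abs(e) for e in ez)
    print(peak)
    ```

    Averaging the resistor voltage over the old and new time steps (the (1 - beta)/(1 + beta) factor) keeps the update stable and dissipative, which is the standard way lumped losses are embedded in FDTD.
    
    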

  • Stereo Matching between Three Images by Iterative Refinement in PVS

    Makoto KIMURA  Hideo SAITO  Takeo KANADE  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E86-D No:1  Page(s): 89-100

    In the fields of computer vision and computer graphics, Image-Based Rendering (IBR) methods are often used to synthesize images of a real scene. Image synthesis by IBR requires dense, correct matching points in the images; however, IBR does not require 3D geometry reconstruction or camera calibration in Euclidean geometry. On the other hand, a reconstructed 3D model can easily reveal occlusions in images. In this paper, we propose an approach to reconstruct 3D shape in a voxel space named the Projective Voxel Space (PVS). Since PVS is defined by projective geometry, it requires only weak calibration. PVS is determined by rectification of the epipolar lines in three images. The three rectified images are orthogonal projections of the scene in PVS, so image-projection operations in PVS are simple. In both PVS and Euclidean geometry, a point in an image lies on a projection ray from a point on the surface of an object in the scene; another image then either has a correct matching point (no occlusion) or no matching point (occlusion). This constrains the search for matching points and object surfaces. Taking advantage of the simplicity of projection in PVS, correlation values between points in the images are computed and iteratively refined using this constraint. Finally, the shapes of the objects in the scene are acquired in PVS. The reconstructed shape in PVS is not geometrically similar to the 3D shape in Euclidean geometry; however, it denotes consistent matching points in the three images and also indicates the existence of occluded points. Therefore, the reconstructed shape in PVS is sufficient for image synthesis by IBR.

  • Effective Nonlinear Receivers for High Density Optical Recording

    Luigi AGAROSSI  Sandro BELLINI  Pierangelo MIGLIORATI  

     
    PAPER-Optoelectronics

    Vol: E85-C No:9  Page(s): 1675-1683

    The starting point of this paper is the definition of a nonlinear model of the readout process in high-density optical discs. Under high-density conditions, signal readout is not a linear process and also suffers from crosstalk. To cope with these problems, a suitable nonlinear model must be identified. A physical model based on scalar optical theory is used to identify the kernels of a nonlinear model based on the Volterra series. Both analysis and simulations show that a second-order two-dimensional model accurately describes the readout process. Once equipped with the Volterra channel model, we evaluate the performance of various nonlinear receivers. First we consider Nonlinear Adaptive Volterra Equalization (NAVE). Simulations show that the performance of classical structures for linear channels is significantly degraded by the nonlinear response. The nonlinear NAVE receiver can achieve better performance than a Maximum Likelihood Sequence Estimator (MLSE), with lower complexity. An innovative Nonlinear Maximum Likelihood Sequence Estimator (NMLSE), based on the combination of MLSE and nonlinear Inter-Symbol Interference (ISI) cancellation, is presented. NMLSE offers significant advantages over traditional MLSE and performs better than traditional equalization for nonlinear channels (such as NAVE). Finally, the paper deals with cancellation of crosstalk from adjacent tracks: we propose and analyze an adaptive nonlinear crosstalk canceller based on a three-spot detection system. For simplicity, all performance comparisons in this paper assume additive white Gaussian noise (AWGN).
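    A discrete second-order Volterra channel model of the kind identified here (a linear kernel plus a bilinear second-order kernel) is straightforward to state in code. The sketch below is one-dimensional for brevity, and the kernel values are illustrative rather than identified from an optical channel:

    ```python
    def volterra2(x, h1, h2):
        """Second-order Volterra filter:
        y[n] = sum_i h1[i] * x[n-i]
             + sum_{i,j} h2[i][j] * x[n-i] * x[n-j]"""
        M = len(h1)
        y = []
        for n in range(len(x)):
            past = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
            lin = sum(h1[i] * past[i] for i in range(M))
            quad = sum(h2[i][j] * past[i] * past[j]
                       for i in range(M) for j in range(M))
            y.append(lin + quad)
        return y

    # toy kernels: mild linear ISI plus a small quadratic self-term
    h1 = [1.0, 0.5]
    h2 = [[0.1, 0.0], [0.0, 0.0]]
    y = volterra2([1.0, -1.0, 2.0], h1, h2)
    print(y)
    ```

    A receiver such as NAVE adapts kernels of exactly this structure to invert the channel; the quadratic term is what a purely linear equalizer cannot compensate.
    
    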

  • Target Tracking for Maneuvering Targets Using Multiple Model Filter

    Hiroshi KAMEDA  Takashi MATSUZAKI  Yoshio KOSUGE  

     
    INVITED PAPER-Applications

    Vol: E85-A No:3  Page(s): 573-581

    This paper proposes a maneuvering-target tracking algorithm using multiple model filters. The algorithm is evaluated in terms of tracking performance (tracking success rate and tracking accuracy) for short sampling intervals, in comparison with conventional methods. Several simulations confirm the validity of the algorithm.

  • Optimal Diagnosable Systems on Cayley Graphs

    Toru ARAKI  Yukio SHIBATA  

     
    PAPER-Graphs and Networks

    Vol: E85-A No:2  Page(s): 455-462

    In this paper, we investigate self-diagnosable multiprocessor systems, known as one-step t-diagnosable systems, introduced by Preparata et al. Kohda proposed "highly structured systems" to design diagnosable systems in which faulty processors are diagnosed efficiently. On the other hand, Cayley graphs have been investigated as good models for architectures of large-scale parallel processor systems. We investigate conditions for Cayley graphs to be topologies for optimal highly structured diagnosable systems, and present several examples of optimal diagnosable systems represented by Cayley graphs.
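    A Cayley graph used as an interconnection topology can be generated mechanically from a set of group generators: vertices are group elements, and each generator contributes an edge. The sketch below builds the Cayley graph of the symmetric group S3 under two transpositions by breadth-first closure (a toy example, not one of the paper's diagnosable systems):

    ```python
    def cayley_graph(generators, identity):
        """Cayley graph of the group generated by `generators`
        (permutations as tuples); edges v -> g*v for each generator g."""
        def compose(p, q):              # (p o q)(i) = p[q[i]]
            return tuple(p[i] for i in q)
        adj = {}
        frontier, seen = [identity], {identity}
        while frontier:                 # BFS closure over the group
            nxt = []
            for v in frontier:
                adj.setdefault(v, set())
                for g in generators:
                    w = compose(g, v)
                    adj[v].add(w)
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        return adj

    # S3 generated by the transpositions (0 1) and (1 2)
    gens = [(1, 0, 2), (0, 2, 1)]
    g = cayley_graph(gens, (0, 1, 2))
    print(len(g))   # 6 vertices: the order of S3
    ```

    The resulting graph is vertex-transitive and regular (here 2-regular, since both generators are involutions), which is the kind of structural regularity that makes Cayley graphs attractive as diagnosable-system topologies.
    
    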

  • Proposal of an Adaptive Vision-Based Interactional Intention Inference System in Human/Robot Coexistence

    Minh Anh Thi HO  Yoji YAMADA  Takayuki SAKAI  Tetsuya MORIZONO  Yoji UMETANI  

     
    PAPER

    Vol: E84-D No:12  Page(s): 1596-1602

    The paper proposes a vision-based system for adaptively inferring the interactional intention of a person approaching a robot, which plays an important role in the succeeding stage of human/robot cooperative handling of workpieces and tools in production lines. Here, interactional intention means the intention to interact with or operate the robot, which we propose to estimate from the path of the person's head during an incipient period of time. To implement this inference capability, first, human entrance is detected and modeled by an ellipse to supply information about the head position. Second, a B-spline technique approximates the trajectory with a reduced set of control points so that the system acquires information about the direction and curvature of the motion. Finally, Hidden Markov Models (HMMs) are applied as adaptive inference engines at the stage of inferring the interactional intention. The HMM algorithm, with its stochastic pattern-matching capability, is extended to judge whether or not a person has an intention toward the robot at the incipient time. The re-estimation process models the motion behavior of a human worker with and without the intention to operate the robot. Experimental results demonstrate the adaptability of the inference system using the extended HMM algorithm for filtering out motion deviations over the trajectory.
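    The B-spline trajectory-approximation step can be sketched with a uniform cubic B-spline evaluated from a control polygon. The control points below are illustrative; the paper's procedure for choosing the reduced set of control points from the observed head path is not reproduced here:

    ```python
    def cubic_bspline(points, samples_per_seg=4):
        """Sample a uniform cubic B-spline smoothing the control polygon
        `points` (list of (x, y) tuples)."""
        def blend(t, p0, p1, p2, p3):
            # uniform cubic B-spline basis; the four weights sum to 1
            b0 = (1 - t) ** 3 / 6
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
            b3 = t ** 3 / 6
            return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                         for a, b, c, d in zip(p0, p1, p2, p3))
        curve = []
        for i in range(len(points) - 3):          # one segment per 4 controls
            for s in range(samples_per_seg):
                curve.append(blend(s / samples_per_seg, *points[i:i + 4]))
        return curve

    pts = [(0, 0), (1, 2), (2, 2), (3, 0), (4, 1)]
    print(cubic_bspline(pts)[:3])
    ```

    Because the basis functions sum to one, the sampled curve stays inside the convex hull of the control points, giving a smooth path whose tangent and curvature can then feed the HMM-based intention inference.
    
    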

  • Error Models and Fault-Secure Scheduling in Multiprocessor Systems

    Koji HASHIMOTO  Tatsuhiro TSUCHIYA  Tohru KIKUNO  

     
    PAPER-Fault Tolerance

    Vol: E84-D No:5  Page(s): 635-650

    A schedule for a parallel program is said to be 1-fault-secure if a system that uses the schedule can either produce correct output for the program or detect the presence of any faults in a single processor. Although several fault-secure scheduling algorithms have been proposed, they can only be applied to a class of tree-structured task graphs with uniform computation costs. Moreover, they assume a stringent error model, called the redeemable error model, that considers extremely unlikely cases. In this paper, we first propose two new, plausible error models that restrict the manner of error propagation. We then present three fault-secure scheduling algorithms, one for each of the three models. Unlike previous algorithms, the proposed algorithms can handle arbitrary task graphs with arbitrary computation and communication costs. Through experiments, we evaluate these algorithms and study the impact of the error models on the lengths of fault-secure schedules.
