
Keyword Search Result

[Keyword] OMP (3945 hits)

Results 2701-2720 of 3945

  • Node-to-Set Disjoint Paths Problem in Pancake Graphs

    Keiichi KANEKO, Yasuto SUZUKI

    PAPER-Algorithms and Applications
    Vol: E86-D No:9  Page(s): 1628-1633

    In this paper, we give an algorithm for the node-to-set disjoint paths problem in pancake graphs, together with its evaluation results. The algorithm runs in polynomial order of n for an n-pancake graph. It is based on recursion and is divided into two cases according to the distribution of the destination nodes among the classes into which all nodes of a pancake graph are categorized. The sum of the lengths of the obtained paths and the time complexity of the algorithm are estimated, and the average performance is evaluated by computer simulation.
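
    As a rough illustration of the structure referred to above (not the authors' algorithm itself), the following Python sketch builds a small pancake graph, in which nodes are permutations and edges are prefix reversals, and groups the nodes into classes by their last symbol; this particular categorization is an assumption made here for illustration.

        from itertools import permutations

        def pancake_neighbors(perm):
            """Neighbors in the pancake graph: reverse a prefix of length 2..n."""
            return [perm[:i][::-1] + perm[i:] for i in range(2, len(perm) + 1)]

        n = 4
        nodes = list(permutations(range(1, n + 1)))

        # Group nodes into classes by their last symbol (each class induces a
        # sub-pancake graph on the remaining n-1 symbols).
        classes = {}
        for v in nodes:
            classes.setdefault(v[-1], []).append(v)

        print(len(nodes), "nodes,", len(pancake_neighbors(nodes[0])), "neighbors each")
        print("class sizes:", {k: len(v) for k, v in sorted(classes.items())})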

  • Balanced Bowtie Decomposition of Complete Multigraphs

    Kazuhiko USHIO, Hideaki FUJIMOTO

    PAPER-Graphs and Networks
    Vol: E86-A No:9  Page(s): 2360-2365

    We show that the necessary and sufficient condition for the existence of a balanced bowtie decomposition of the complete multigraph λKn is n ≥ 5 and λ(n-1) ≡ 0 (mod 12). Decomposition algorithms are also given.
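
    As a quick arithmetic companion to the condition quoted above (a sketch written for this listing, not taken from the paper), the snippet below tests n ≥ 5 and λ(n-1) ≡ 0 (mod 12), together with the obvious edge-count requirement that λKn have an edge count divisible by 6, the number of edges in a bowtie.

        def balanced_bowtie_condition(n, lam):
            """Condition quoted in the abstract: n >= 5 and lam*(n-1) = 0 (mod 12)."""
            return n >= 5 and (lam * (n - 1)) % 12 == 0

        def edge_count_divisible(n, lam):
            """Sanity check: lambda*K_n has lam*n*(n-1)/2 edges; a bowtie uses 6."""
            return (lam * n * (n - 1) // 2) % 6 == 0

        for n, lam in [(5, 3), (9, 1), (13, 1), (7, 2), (7, 1)]:
            print(n, lam, balanced_bowtie_condition(n, lam), edge_count_divisible(n, lam))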

  • Nonlinear System Control Using Compensatory Neuro-Fuzzy Networks

    Cheng-Jian LIN, Cheng-Hung CHEN

    PAPER-Optimization and Control
    Vol: E86-A No:9  Page(s): 2309-2316

    In this paper, a Compensatory Neuro-Fuzzy Network (CNFN) for nonlinear system control is proposed. The compensatory fuzzy reasoning method uses adaptive fuzzy operations in a neural fuzzy network, which makes the fuzzy logic system more adaptive and effective. An on-line learning algorithm is proposed to construct the CNFN automatically: the rules are created and adapted as on-line learning proceeds, via simultaneous structure and parameter learning. The structure learning is based on a fuzzy similarity measure, and the parameter learning is based on the backpropagation algorithm. The advantages of the proposed learning algorithm are that it converges quickly and that the obtained fuzzy rules are more precise. The performance of the CNFN compares favorably with that of various other existing models.
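
    A minimal sketch of a compensatory fuzzy operation is given below, using a commonly cited blend of a pessimistic (product) and an optimistic (n-th root of the product) aggregation controlled by a compensatory degree gamma; the exact operator, membership functions, and network used in the paper may differ, so this is only an assumed illustration.

        import numpy as np

        def compensatory_and(memberships, gamma):
            """Blend pessimistic (product) and optimistic (n-th root of product)
            aggregations with compensatory degree gamma in [0, 1]."""
            u = np.asarray(memberships, dtype=float)
            pessimistic = np.prod(u)
            optimistic = pessimistic ** (1.0 / len(u))
            return pessimistic ** (1.0 - gamma) * optimistic ** gamma

        # Firing strength of one fuzzy rule with two membership values.
        print(compensatory_and([0.8, 0.5], gamma=0.0))  # pure product: 0.40
        print(compensatory_and([0.8, 0.5], gamma=1.0))  # pure n-th root: about 0.63
        print(compensatory_and([0.8, 0.5], gamma=0.5))  # compensatory blend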

  • Low Complexity Reverselink Beamforming Based on Simplex Downhill Optimization Method for CDMA Systems

    Joonsung LEE, Changheon OH, Chungyong LEE, Dae-Hee YOUN

    LETTER-Antenna and Propagation
    Vol: E86-B No:8  Page(s): 2541-2544

    A new beamforming method based on the simplex downhill optimization process is presented for reverse-link CDMA systems. The proposed system performs code-filtering at each antenna for each user. The new beamforming method requires less computation and converges faster than existing algorithms. The simulation results show that the proposed algorithm has better BER performance in the case of a time-varying channel.
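
    As a hedged illustration of driving beamforming weights with the simplex downhill (Nelder-Mead) method, the toy sketch below maximizes array gain toward an assumed desired direction while penalizing an assumed interferer, using SciPy's Nelder-Mead optimizer; the cost function and CDMA code-filtering of the actual paper are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        M = 4  # antennas in a half-wavelength-spaced uniform linear array (assumed)

        def steering(theta_rad):
            return np.exp(1j * np.pi * np.arange(M) * np.sin(theta_rad))

        a_des = steering(np.deg2rad(20.0))    # desired user direction (assumed)
        a_int = steering(np.deg2rad(-40.0))   # interferer direction (assumed)

        def cost(x):
            """Negative SINR-like objective; x packs real and imaginary weight parts."""
            w = x[:M] + 1j * x[M:]
            w /= np.linalg.norm(w) + 1e-12
            signal = np.abs(np.vdot(w, a_des)) ** 2
            interference = np.abs(np.vdot(w, a_int)) ** 2
            return -(signal / (interference + 1e-3))

        x0 = np.concatenate([np.ones(M), np.zeros(M)])
        res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 2000})
        print("achieved objective:", -res.fun)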

  • Extraction of Movement-Related Potentials from EEG Based on DT-Aided Independent Component Analysis

    Kuniaki UTO, Keiichi HIBI, Yukio KOSUGI

    LETTER-Medical Engineering
    Vol: E86-D No:8  Page(s): 1464-1469

    In this paper, our aim is to extract movement-related potentials, especially readiness potentials (RPs), from EEGs in real time using a small number of scalp electrodes. We propose a method composed of independent component analysis (ICA), dipole tracing (DT), and scalp Laplacian methods. The proposed method shows good real-time RP extraction from a single trial of movement, by selecting EEGs with high reliability based on the DT and by improving the spatial resolution of the scalp potentials based on the scalp Laplacian.
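
    A minimal, generic ICA sketch (scikit-learn's FastICA on synthetic mixtures) is shown below to illustrate the source-separation step only; the dipole-tracing and scalp-Laplacian stages of the method are not reproduced, and the signals are purely synthetic stand-ins.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)

        # Synthetic "sources": a slow ramp (a crude stand-in for a readiness-
        # potential-like drift), a 10 Hz oscillation, and noise.  Illustrative only.
        sources = np.c_[np.clip(t - 4, 0, None),
                        np.sin(2 * np.pi * 10 * t),
                        rng.normal(size=t.size)]
        mixing = rng.normal(size=(6, 3))          # 6 "electrodes", 3 sources
        eeg = sources @ mixing.T                  # observed mixtures, shape (2000, 6)

        ica = FastICA(n_components=3, random_state=0)
        estimated = ica.fit_transform(eeg)        # estimated independent components
        print(estimated.shape)                    # (2000, 3)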

  • Optimum Light-Path Pricing in Survivable Optical Networks

    Nagao OGINO, Masatoshi SUZUKI

    PAPER
    Vol: E86-B No:8  Page(s): 2358-2367

    Progress in WDM transmission technology and the development of optical cross-connect systems have made optical backbone networks a reality. Conventional planning methodologies for such optical backbone networks calculate optimum light-path arrangements that minimize the network cost under the condition that the number of demanded light-paths is given in advance. However, the light-path demand varies according to the light-path prices. Thus, a new planning methodology for optical backbone networks is necessary to optimize the light-path prices and to maximize the profit obtained from the network. This paper proposes such a planning methodology for survivable optical networks. The methodology is based on economic theory for competitive markets involving plural kinds of commodities. Using this methodology, the optimum light-path prices can be determined so as to maximize the obtained profit. A numerical example shows that the obtained profit can be improved by preparing various light-path classes with different recovery modes and introducing appropriate light-path pricing according to the reliability of each light-path class.
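
    To make the pricing idea concrete with a deliberately simplified stand-in (a single light-path class, a linear demand curve, and numbers that are all assumptions, not the paper's market model), the sketch below picks the price that maximizes profit subject to a capacity limit.

        import numpy as np

        capacity = 100            # light-paths the network can provide (assumed)
        unit_cost = 2.0           # provisioning cost per light-path (assumed)

        def demand(price):
            """Toy linear demand curve, capped by network capacity."""
            return np.clip(200 - 10 * price, 0, capacity)

        prices = np.linspace(0, 20, 2001)
        profit = (prices - unit_cost) * demand(prices)
        best = prices[np.argmax(profit)]
        print(f"profit-maximizing price ~ {best:.2f}, "
              f"demand {demand(best):.0f}, profit {profit.max():.1f}")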

  • Improvement of the Relative Permittivity Evaluation with a Whispering-Gallery Mode Dielectric Resonator Method

    Hajime TAMURA, Yoshinori KOGAMI, Kazuhito MATSUMURA

    PAPER
    Vol: E86-C No:8  Page(s): 1665-1671

    The whispering-gallery mode resonator method has been presented for the complex permittivity evaluation of low-loss dielectric materials in the millimeter-wave region. However, it has been found that the evaluation error depends slightly on the frequency for a given sample; this error comes from the approximate analysis used in the procedure. In this paper, a mode-matching method is applied to this evaluation technique to improve the measured results. It is confirmed experimentally that the reliability of the presented method is improved for millimeter-wave permittivity measurement.

  • Further Results on Passification of Non-square Linear Systems Using an Input-Dimensional Compensator

    Young I. SON, Hyungbo SHIM, Nam H. JO, Jin H. SEO

    LETTER-Systems and Control
    Vol: E86-A No:8  Page(s): 2139-2143

    Passification of a non-square linear system is considered by using a parallel feedforward compensator (PFC) and a squaring gain matrix. In contrast to the previous result, a technical assumption is removed by modifying the structure of the PFC. As a result, a broader class of non-square systems can be made passive by the proposed design method. Using static output feedback (SOF) algorithms, the input-dimensional PFC and the squaring matrix can be designed systematically. The effectiveness of the proposed method is illustrated by practical system examples from the control literature.

  • Stable Learning Algorithm for Blind Separation of Temporally Correlated Acoustic Signals Combining Multistage ICA and Linear Prediction

    Tsuyoki NISHIKAWA, Hiroshi SARUWATARI, Kiyohiro SHIKANO

    PAPER
    Vol: E86-A No:8  Page(s): 2028-2036

    We propose a new, stable algorithm for blind source separation (BSS) that combines multistage ICA (MSICA) and linear prediction. MSICA, previously proposed by the authors, is a method in which frequency-domain ICA (FDICA) for rough separation is followed by time-domain ICA (TDICA) to remove residual crosstalk. For temporally correlated signals, TDICA must be used with a nonholonomic constraint to avoid the decorrelation effect caused by the holonomic constraint. However, stability cannot be guaranteed in the nonholonomic case. To solve this problem, linear predictors estimated from the signals roughly separated by FDICA are inserted before the holonomic TDICA as prewhitening, and dewhitening is performed after TDICA. The stability of the proposed algorithm is guaranteed by the holonomic constraint, and the pre/dewhitening processing prevents the decorrelation. Experiments in a reverberant room reveal that the algorithm achieves higher stability and separation performance.
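
    The prewhitening/dewhitening step can be sketched generically as below; this is a single-channel illustration with an arbitrarily chosen predictor order and test signal, not the authors' multichannel MSICA processing. A linear predictor is fitted from the autocorrelation, its prediction-error (FIR) filter whitens the signal, and the inverse all-pole filter restores the original spectrum afterwards.

        import numpy as np
        from scipy.linalg import solve_toeplitz
        from scipy.signal import lfilter

        rng = np.random.default_rng(1)
        # A temporally correlated toy signal: white noise through an AR(2) filter.
        x = lfilter([1.0], [1.0, -1.2, 0.5], rng.normal(size=4000))

        p = 8                                           # predictor order (assumed)
        r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
        a = solve_toeplitz(r[:p], r[1:p + 1])           # linear-prediction coefficients

        whitening = np.r_[1.0, -a]                      # prediction-error (FIR) filter
        e = lfilter(whitening, [1.0], x)                # prewhitened signal
        x_back = lfilter([1.0], whitening, e)           # dewhitening restores the signal

        print("exact inverse pair:", np.allclose(x, x_back))
        print("variance before/after whitening:", np.var(x), np.var(e))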

  • A Fundamental Theory for Operation of Network Systems Extraordinarily Complicated and Diversified on Large-Scales

    Kazuo HORIUCHI

    LETTER-Nonlinear Problems
    Vol: E86-A No:8  Page(s): 2149-2151

    A mathematical theory based on the concept of functional analysis is proposed that is suitable for the operation of network systems that are extraordinarily complicated and diversified on large scales, through connected-block structures. Fundamental conditions for the existence and evaluation of the system behaviors of such network systems are obtained in the form of a fixed point theorem for a system of nonlinear mappings.

  • Directions-of-Arrival Tracking of Coherent Cyclostationary Signals in Array Processing

    Jingmin XIN, Akira SANO

    PAPER
    Vol: E86-A No:8  Page(s): 2037-2046

    In this paper, we consider the problem of estimating the time-varying directions-of-arrival (DOAs) of coherent narrowband cyclostationary signals impinging on a uniform linear array (ULA). By exploiting the cyclostationarity of most communication signals, we investigate a new computationally efficient subspace-based direction estimation method that requires neither eigendecomposition nor spatial smoothing (SS). The proposed method exploits the inherent temporal property of the incident signals and a subarray scheme to decorrelate the signal coherency and to suppress the noise and interfering signals, while the null subspace is obtained from the resulting cyclic correlation matrix through a linear operation. An on-line implementation of this method is then presented for tracking the DOAs of slowly moving coherent signals. The proposed algorithm is computationally simple and has good tracking performance. The effectiveness of the proposed method is verified through numerical examples.
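
    A hedged sketch of the cyclostationarity ingredient is given below: it only estimates a cyclic correlation matrix of array snapshots at an assumed baud-rate cycle frequency, and does not reproduce the subarray scheme, the linear operation for the null subspace, or the tracking part of the method.

        import numpy as np

        def cyclic_correlation_matrix(X, alpha, lag):
            """Estimate R_x^alpha(lag) = <x(t+lag) x(t)^H exp(-j*2*pi*alpha*t)>
            from snapshots X of shape (sensors, samples); one common definition."""
            M, N = X.shape
            t = np.arange(N - lag)
            phase = np.exp(-1j * 2 * np.pi * alpha * t)
            return (X[:, lag:] * phase) @ X[:, :N - lag].conj().T / (N - lag)

        rng = np.random.default_rng(2)
        M, sps, n_sym = 4, 8, 500                 # sensors, samples per symbol, symbols
        s = np.repeat(rng.choice([-1.0, 1.0], size=n_sym), sps)  # rectangular BPSK-like
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(15.0)))  # steering vector
        X = np.outer(a, s) + 0.1 * (rng.normal(size=(M, s.size))
                                    + 1j * rng.normal(size=(M, s.size)))

        # At the baud-rate cycle frequency and a half-symbol lag, the estimate is
        # dominated by the cyclostationary signal (ideally rank one, ~ a a^H).
        R = cyclic_correlation_matrix(X, alpha=1.0 / sps, lag=sps // 2)
        print(np.round(np.abs(R), 2))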

  • Does Reinforcement Learning Simulate Threshold Public Goods Games?: A Comparison with Subject Experiments

    Atsushi IWASAKI, Shuichi IMURA, Sobei H. ODA, Itsuo HATONO, Kanji UEDA

    PAPER
    Vol: E86-D No:8  Page(s): 1335-1343

    This paper examines the descriptive power and the limitations of a simple reinforcement learning model (REL) by comparing simulation results with the results of an economic experiment employing human subjects. Agent-based computational economics and experimental economics are becoming increasingly popular as tools for economists. A new variety of learning models using games with a unique equilibrium has been proposed and examined in both of the fields mentioned above. However, little attention has been given to games with multiple equilibria. We examine threshold public goods games with two types of equilibria, where each player in a five-person group simultaneously contributes to the public good from her private endowment. In the experiments, we observe two patterns of subject behavior: a cooperative and a non-cooperative pattern. Our simulation results show that the REL reproduces the cooperative pattern but does not reproduce the non-cooperative pattern. However, the results suggest that the REL does reproduce the non-cooperative pattern in terms of the agents' internal states. This implies that deterministic strategies would be required to reproduce the non-cooperative pattern in these games. We show an example of the REL with deterministic strategies.
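
    A minimal simulation sketch in the spirit of the abstract is given below: a Roth-Erev-style reinforcement learner in a threshold public goods game. The payoff numbers, threshold, and update rule are assumptions chosen for illustration and are not the paper's REL specification or experimental parameters.

        import numpy as np

        rng = np.random.default_rng(3)
        N_PLAYERS, ENDOWMENT, THRESHOLD, REWARD = 5, 10, 25, 30   # assumed parameters
        ACTIONS = np.arange(ENDOWMENT + 1)                        # contributions 0..10

        # Roth-Erev-style propensities: choice probability proportional to propensity.
        prop = np.ones((N_PLAYERS, ACTIONS.size))

        for episode in range(5000):
            probs = prop / prop.sum(axis=1, keepdims=True)
            contrib = np.array([rng.choice(ACTIONS, p=probs[i]) for i in range(N_PLAYERS)])
            provided = contrib.sum() >= THRESHOLD                 # public good provided?
            payoffs = ENDOWMENT - contrib + (REWARD / N_PLAYERS if provided else 0.0)
            # Reinforce each player's chosen action by its (non-negative) payoff.
            prop[np.arange(N_PLAYERS), contrib] += np.maximum(payoffs, 0.0)

        final_probs = prop / prop.sum(axis=1, keepdims=True)
        print("mean contribution per player:", np.round(final_probs @ ACTIONS, 2))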

  • Trade-Offs in Custom Circuit Designs for Subgraph Isomorphism Problems

    Shuichi ICHIKAWA, Hidemitsu SAITO, Lerdtanaseangtham UDORN, Kouji KONISHI

    PAPER-VLSI Systems
    Vol: E86-D No:7  Page(s): 1250-1257

    Many application programs can be modeled as a subgraph isomorphism problem. However, this problem is generally NP-complete and difficult to compute. A custom computing circuit is a prospective solution for such problems. This paper examines various accelerator designs for subgraph isomorphism problems based on Ullmann's algorithm and Konishi's algorithm. These designs are quantitatively evaluated from two points of view: logic scale and execution time. Our study revealed that Ullmann's design is faster but larger in logic scale. Partially sequential versions of Ullmann's algorithm can be more cost-effective than Ullmann's original design. The hardware of Konishi's algorithm is smaller in logic scale, operates at a higher frequency, and is more cost-effective.
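
    For reference, a compact software rendering of Ullmann-style backtracking (a candidate map plus the usual refinement step) is sketched below; it is a generic sequential version and does not capture the hardware designs or Konishi's algorithm compared in the paper.

        def ullmann(adj_p, adj_g):
            """Find one subgraph isomorphism mapping pattern -> target, or None.
            adj_p, adj_g: adjacency as {node: set(neighbors)}."""
            p_nodes, g_nodes = list(adj_p), list(adj_g)
            # Initial candidates: a degree filter.
            cand = {u: {v for v in g_nodes if len(adj_g[v]) >= len(adj_p[u])}
                    for u in p_nodes}

            def refine(cand):
                # Remove v from cand[u] if some pattern neighbor of u has no
                # candidate among the target neighbors of v (Ullmann refinement).
                changed = True
                while changed:
                    changed = False
                    for u in p_nodes:
                        for v in list(cand[u]):
                            if any(not (cand[w] & adj_g[v]) for w in adj_p[u]):
                                cand[u].discard(v)
                                changed = True
                return all(cand[u] for u in p_nodes)

            def backtrack(i, used, cand):
                if i == len(p_nodes):
                    return {u: next(iter(cand[u])) for u in p_nodes}
                u = p_nodes[i]
                for v in cand[u] - used:
                    trial = {w: set(cw) for w, cw in cand.items()}
                    trial[u] = {v}
                    if refine(trial):
                        found = backtrack(i + 1, used | {v}, trial)
                        if found:
                            return found
                return None

            return backtrack(0, set(), cand) if refine(cand) else None

        # Example: embed a triangle into a 4-clique.
        pattern = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
        target = {a: {b for b in range(4) if b != a} for a in range(4)}
        print(ullmann(pattern, target))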

  • An Adaptive Multihop Clustering Scheme for Ad Hoc Networks with High Mobility

    Tomoyuki OHTA, Shinji INOUE, Yoshiaki KAKUDA, Kenji ISHIDA

    PAPER
    Vol: E86-A No:7  Page(s): 1689-1697

    A clustering scheme for ad hoc networks aims to manage a large number of mobile devices by utilizing a hierarchical structure of the network. In order to construct and maintain an effective hierarchical structure in ad hoc networks where mobile devices may move with high mobility, the following requirements must be satisfied. (1) The role of each mobile device in the hierarchical structure must be adaptive to dynamic changes in the network topology; the role of each mobile device should thus change autonomously based on local information at that device. (2) The overhead for managing the hierarchical structure must be small; the number of mobile devices in each cluster should thus be almost equal. This paper proposes an adaptive multihop clustering scheme for highly mobile ad hoc networks. The results obtained by extensive simulation experiments show that the proposed scheme does not depend on the mobility or node degree of the mobile devices in the network, thereby satisfying the above requirements.

  • Two-Tier Checkpointing Algorithm Using MSS in Wireless Networks

    Kyue-Sup BYUN, Sung-Hwa LIM, Jai-Hoon KIM

    PAPER-Network Management/Operation
    Vol: E86-B No:7  Page(s): 2136-2142

    This paper presents a two-tier coordinated checkpointing algorithm that reduces the number of messages in mobile computing by organizing the protocol into two levels. Since mobile devices have high mobility and lack resources (e.g., storage, bandwidth, and battery power), traditional distributed algorithms such as coordinated checkpointing cannot be applied directly in mobile environments. In our proposed two-tier coordinated checkpointing algorithm, the messages to be transferred are requested by the mobile hosts and are handled by the appropriate MSSs (Mobile Support Stations), and broadcast messages are handled by the MSS instead of being relayed directly to all mobile hosts as in previous algorithms. This reduces the communication cost and maintains overall system consistency. In a wireless cellular network, mobile computing based on the two-tier coordinated checkpointing algorithm reduces the number of synchronization messages. We perform performance comparisons by parametric analysis to show that the two-tier coordinated checkpointing algorithm reduces the communication cost compared to previous algorithms in which messages are sent directly to the mobile hosts.
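
    The message-cost argument can be illustrated with a toy parametric comparison; the cost ratios and cell sizes below are assumptions, not the paper's analysis. Relaying coordination messages through each MSS replaces one wireless message per mobile host with one wired message per MSS plus a single local wireless broadcast per cell.

        def direct_cost(hosts, c_wireless):
            """Coordinator contacts every mobile host individually over wireless links."""
            return hosts * c_wireless

        def two_tier_cost(cells, c_wired, c_wireless):
            """Coordinator contacts each MSS over the wired network; each MSS covers
            its own cell with one local wireless broadcast."""
            return cells * (c_wired + c_wireless)

        # Assumed parameters: 10 cells, 20 hosts per cell, wireless messages five
        # times as costly as wired ones.
        cells, hosts_per_cell, c_wired, c_wireless = 10, 20, 1.0, 5.0
        print("direct  :", direct_cost(cells * hosts_per_cell, c_wireless))  # 1000.0
        print("two-tier:", two_tier_cost(cells, c_wired, c_wireless))        # 60.0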

  • Compound-Error-Correcting Codes and Their Augmentation

    Masaya FUJISAWA, Shusuke MAEDA, Shojiro SAKATA

    PAPER-Coding Theory
    Vol: E86-A No:7  Page(s): 1813-1819

    A compound error is any combination of burst errors with various burst lengths, including random errors. The compound weight of any such error is defined as a kind of combinatorial metric that generalizes Gabidulin's metric. First, we present a fast method for calculating the compound weight of any word. Based on this method, and as an extension of Wadayama's augmenting method for the Hamming weight, we propose a method of constructing codes with a higher coding rate by augmenting any compound-error-correcting code. Furthermore, we show some examples of good compound-error-correcting codes obtained by using our augmenting method.
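
    Since the exact definition of the compound weight is not reproduced in this listing, the toy function below only illustrates the flavor of a burst-oriented weight: it greedily counts how many bursts of bounded length are needed to cover the nonzero positions of a word. It is explicitly not Gabidulin's metric nor the weight defined in the paper.

        def toy_burst_cover_weight(word, max_burst_len):
            """Greedy count of bursts of length <= max_burst_len covering all
            nonzero positions.  Illustrative only; not the paper's compound weight."""
            positions = [i for i, bit in enumerate(word) if bit]
            count, i = 0, 0
            while i < len(positions):
                count += 1
                start = positions[i]
                while i < len(positions) and positions[i] < start + max_burst_len:
                    i += 1
            return count

        print(toy_burst_cover_weight([0, 1, 1, 1, 0, 0, 1, 0, 0, 0], max_burst_len=3))  # 2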

  • Model Selection with Componentwise Shrinkage in Orthogonal Regression

    Katsuyuki HAGIWARA

    PAPER-Digital Signal Processing
    Vol: E86-A No:7  Page(s): 1749-1758

    A model selection criterion has previously been proposed for the problem of determining the major frequency components of a signal disturbed by noise. In this paper, the criterion is extended to cover a penalized cost function that yields a componentwise shrinkage estimator, and model selection is shown to be consistent under the proposed criterion. A simple numerical simulation was then conducted, and it was found that the proposed criterion with an empirically estimated componentwise shrinkage estimator outperforms the original criterion.

  • Adaptive Blind Source Separation Using a Risk-Sensitive Criterion

    Junya SHIMIZU

    PAPER-Digital Signal Processing
    Vol: E86-A No:7  Page(s): 1724-1731

    An adaptive blind signal separation filter is proposed within a risk-sensitive criterion framework. This criterion adopts an exponential-type cost function; hence, the proposed criterion varies the weight given to the adaptation quantity depending on the errors in the estimates: the adaptation is accelerated when the estimation error is large, and unnecessary acceleration does not occur close to convergence. In addition, since the algorithm derivation is related to H∞ filtering, the derived algorithm is robust to perturbations and estimation errors. Hence, this method converges faster than conventional least squares methods. The effectiveness of the new algorithm is demonstrated by simulation.
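
    The error-dependent weighting can be illustrated with a deliberately simple adaptive-filter sketch (plain system identification, not blind source separation, and not the paper's derivation): the exponential factor enlarges the update when the estimation error is large and approaches an ordinary LMS step near convergence.

        import numpy as np

        rng = np.random.default_rng(5)
        n, gamma, mu = 5000, 0.5, 0.01            # samples, risk factor, step (assumed)
        w_true = np.array([1.0, -0.5, 0.25, 0.0]) # unknown system to identify
        x = rng.normal(size=n)

        w = np.zeros(4)
        for k in range(4, n):
            u = x[k - 4:k][::-1]                  # regressor (last four inputs)
            d = w_true @ u + 0.05 * rng.normal()  # desired signal
            e = d - w @ u
            # Exponential (risk-sensitive-flavored) weighting of the update.
            w += mu * np.exp(np.minimum(gamma * e ** 2, 5.0)) * e * u

        print(np.round(w, 3))                     # close to w_true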

  • Improvement of Cone Beam CT Image Using Singularity Detection

    Yi-Qiang YANG, Nobuyuki NAKAMORI, Yasuo YOSHIDA

    PAPER
    Vol: E86-D No:7  Page(s): 1206-1213

    In medical diagnosis, cone beam CT increases the dose absorbed by a patient. However, the radiographic noise (such as quantum noise) in a CT image increases when the radiation exposure is reduced. In this paper, we propose a method to improve CT images degraded by quantum mottle, based on the 2-D wavelet transform modulus sum (WTMS). The noisy and regular parts of an image can be distinguished by tracing the evolution of its 2-D WTMS across scales. Our experimental results show that most of the quantum mottle in the 2-D projections is removed by the proposed method while the edges are well preserved. We investigate the relation between the number of X-ray photons and the quality of the denoised images. The results show the possibility that a patient's dose can be reduced by about 10% with the same visual quality by using our method.
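
    A generic wavelet-domain denoising sketch using PyWavelets is given below; it applies plain soft thresholding of detail coefficients rather than the authors' wavelet transform modulus sum (WTMS) singularity tracking, and uses Gaussian noise as a crude stand-in for quantum mottle.

        import numpy as np
        import pywt

        rng = np.random.default_rng(6)
        # Toy "projection": a disc-like image plus noise standing in for quantum mottle.
        x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
        clean = (x ** 2 + y ** 2 < 0.4).astype(float)
        noisy = clean + 0.3 * rng.normal(size=clean.shape)

        coeffs = pywt.wavedec2(noisy, "db4", level=3)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
        den_coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft")
                                          for c in level) for level in coeffs[1:]]
        denoised = pywt.waverec2(den_coeffs, "db4")[:128, :128]

        print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
        print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))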

  • Compression of 3D Models by Remesh on Texture Images

    Masahiro OKUDA, Kyoko NAGATOMO, Masaaki IKEHARA, Shin-ichi TAKAHASHI

    PAPER-Computer Graphics
    Vol: E86-D No:6  Page(s): 1110-1115

    Due to the rapid development of computer and information technology, 3D modeling and rendering capabilities are becoming increasingly important in many applications, including industrial design, architecture, CAD/CAM, video games, and medical imaging. Since 3D mesh models often contain huge amounts of data, it is time-consuming to retrieve them from a storage device or to download them over a network. Most 3D viewing applications need to obtain the entire file of a 3D model in order to display it, even when the user is interested only in a low-resolution version of the model. Therefore, progressive coding that enables multiresolution transmission of 3D models is desired. In this paper, we propose a progressive coding scheme for 3D meshes with texture, in which we convert irregular meshes to semi-regular ones using texture coordinates, map them onto planes, and apply a 2D image coding algorithm to mesh compression. As our method uses the wavelet transform, the encoded bitstream has a progressive nature. We achieve a high compression rate with the same visual quality as the original models.
