
Keyword Search Result

[Keyword] Z (5900 hits)

Results 2821-2840 of 5900

  • Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations

    Andrzej CICHOCKI  Anh-Huy PHAN  

     
INVITED PAPER

Vol: E92-A No:3  Page(s): 708-721

Nonnegative matrix factorization (NMF) and its extensions, such as Nonnegative Tensor Factorization (NTF), have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining, and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as for sparse nonnegative coding and representation, with many potential applications in computational neuroscience, multi-sensory processing, compressed sensing, and multidimensional data analysis. We have developed a class of optimized local algorithms, referred to as Hierarchical Alternating Least Squares (HALS) algorithms, obtained by performing sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the alpha and beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based BSS not only in the over-determined case but also in the under-determined (over-complete) case (i.e., for a system with fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially when the multi-layer hierarchical NMF approach [3] is used.
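
As a concrete illustration of the local updates described above, the following minimal NumPy sketch implements the basic squared-Euclidean HALS iteration for two-factor NMF (X ≈ AB). The function name, initialization, and iteration count are illustrative assumptions; the alpha/beta-divergence and tensor (NTF) extensions of the paper are omitted.

```python
import numpy as np

def hals_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Basic HALS NMF sketch: X ~= A @ B, all entries nonnegative.

    Each column of A (and row of B) is updated in turn by a closed-form
    nonnegative least-squares step; these are the 'local' updates that
    HALS-type algorithms build on.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.random((m, rank))
    B = rng.random((rank, n))
    for _ in range(n_iter):
        XBt = X @ B.T          # m x rank, fixed during the A-sweep
        BBt = B @ B.T          # rank x rank
        for j in range(rank):
            numer = XBt[:, j] - A @ BBt[:, j] + A[:, j] * BBt[j, j]
            A[:, j] = np.maximum(numer / (BBt[j, j] + eps), eps)
        AtX = A.T @ X          # rank x n, fixed during the B-sweep
        AtA = A.T @ A          # rank x rank
        for j in range(rank):
            numer = AtX[j, :] - AtA[j, :] @ B + AtA[j, j] * B[j, :]
            B[j, :] = np.maximum(numer / (AtA[j, j] + eps), eps)
    return A, B
```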

  • An Efficient Initialization Scheme for SOM Algorithm Based on Reference Point and Filters

    Shu-Ling SHIEH  I-En LIAO  Kuo-Feng HWANG  Heng-Yu CHEN  

     
PAPER-Data Mining

Vol: E92-D No:3  Page(s): 422-432

This paper proposes an efficient self-organizing map (SOM) algorithm based on reference points and filters. A strategy called Reference Point SOM (RPSOM) is proposed to reduce SOM execution time by filtering with two thresholds, T1 and T2. The threshold T1 defines the search boundary used to find the Best-Matching Unit (BMU) for an input vector, while T2 bounds the region within which the BMU finds its neighbors. The proposed algorithm reduces the time complexity of finding the initial neurons from O(n²) to O(n) compared to the algorithm proposed by Su et al. [16]. The RPSOM dramatically reduces the running time, especially on large data sets. From the experimental results, we find that it is better to construct a good initial map and then use unsupervised learning to make small subsequent adjustments.
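
The filtering idea can be sketched as follows: restrict the BMU search to map units whose reference point lies within T1 of the input, falling back to a full search when no reference point qualifies. The partitioning of units among reference points and all names here are assumptions for illustration, not the exact RPSOM procedure.

```python
import numpy as np

def filtered_bmu(x, weights, ref_points, ref_assign, T1):
    """Illustrative RPSOM-style BMU search.

    weights    : (n_units, dim) codebook vectors of the SOM
    ref_points : (n_refs, dim) reference points coarsely partitioning units
    ref_assign : (n_units,) reference-point index of each unit
    Only units whose reference point lies within T1 of the input are
    scanned instead of all n_units; a T2 filter would bound the
    neighborhood update in the same way.
    """
    ref_d = np.linalg.norm(ref_points - x, axis=1)
    active_refs = np.flatnonzero(ref_d <= T1)        # the T1 filter
    if active_refs.size == 0:                        # fall back: full search
        cand = np.arange(len(weights))
    else:
        cand = np.flatnonzero(np.isin(ref_assign, active_refs))
    d = np.linalg.norm(weights[cand] - x, axis=1)
    return cand[np.argmin(d)]
```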

  • Minimum Shield Insertion on Full-Chip RLC Crosstalk Budgeting Routing

    Peng-Yang HUNG  Ying-Shu LOU  Yih-Lang LI  

     
PAPER-VLSI Design Technology and CAD

Vol: E92-A No:3  Page(s): 880-889

This work presents a full-chip RLC crosstalk budgeting routing flow that generates high-quality routing designs under stringent crosstalk constraints. Based on a cost function that accounts for the sensitive nets in the global cells visited by each net, global routing lowers both routing congestion and coupling effects. Crosstalk-driven track routing minimizes capacitive coupling and reduces inductive coupling by avoiding placing sensitive nets on adjacent tracks. To optimize the inductive crosstalk budget, shield insertion is formulated as a minimum column covering problem and solved after track routing for nets whose inductive crosstalk exceeds the budget. The proposed routing flow identifies the required number of shields more accurately, and handles more complex routing problems, than linear programming (LP) methods. Results of this study demonstrate that the proposed approach can effectively and quickly lower inductive crosstalk by up to one-third.
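
The minimum column covering step can be read as a set-cover problem: each candidate shield column protects a set of crosstalk-violating nets. Below is a hedged greedy sketch with hypothetical data structures; the paper's exact formulation may differ.

```python
def greedy_shield_cover(violating_nets, covers):
    """Greedy sketch of minimum column covering for shield insertion.

    violating_nets : set of net ids whose inductive crosstalk exceeds budget
    covers         : dict mapping candidate shield column -> set of net ids
                     that a shield on that column would protect
    Returns a small (not necessarily minimum) set of shield columns.
    """
    uncovered = set(violating_nets)
    chosen = []
    while uncovered:
        # Pick the column protecting the most still-uncovered nets.
        col = max(covers, key=lambda c: len(covers[c] & uncovered))
        gain = covers[col] & uncovered
        if not gain:              # remaining nets cannot be covered
            break
        chosen.append(col)
        uncovered -= gain
    return chosen
```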

  • Low-Complexity Equalizer for OFDM Systems in Doubly-Selective Fading Channels

    Namjeong LEE  Hoojin LEE  Joonhyuk KANG  Gye-Tae GIL  

     
LETTER-Wireless Communication Technologies

Vol: E92-B No:3  Page(s): 1031-1034

In this letter, we propose a computationally efficient equalization technique that employs block minimum mean squared error (MMSE) estimation based on LDL^H factorization. Parallel interference cancellation (PIC) is then executed on the previously obtained output to provide more reliable symbol detection. In particular, the band structure of the frequency-domain channel matrix is exploited to reduce the implementation complexity. Computer simulations show that the proposed technique requires lower complexity than the conventional algorithm to obtain the same performance, and that it exhibits better performance than the conventional counterpart at the same complexity.
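
A minimal sketch of the banded-structure idea, assuming the frequency-domain channel matrix H is banded with half-bandwidth bw: the MMSE system (H^H H + σ²I)x = H^H y is then Hermitian banded and can be solved in O(N·bw²) with SciPy's banded solver. The dense matrix products below are for clarity only, and the LDL^H bookkeeping and PIC stage of the letter are omitted.

```python
import numpy as np
from scipy.linalg import solveh_banded

def banded_mmse_equalize(H, y, noise_var, bw):
    """MMSE equalization exploiting a banded frequency-domain channel matrix.

    H         : (N, N) complex channel matrix with half-bandwidth bw
    y         : (N,) received frequency-domain vector
    noise_var : noise variance sigma^2
    Solves (H^H H + sigma^2 I) x = H^H y with a Hermitian banded solver;
    a real implementation would also exploit the band in the products.
    """
    N = H.shape[0]
    A = H.conj().T @ H + noise_var * np.eye(N)   # Hermitian, bandwidth <= 2*bw
    u = 2 * bw                                   # half-bandwidth of A
    ab = np.zeros((u + 1, N), dtype=complex)     # SciPy upper banded storage
    for k in range(u + 1):
        ab[u - k, k:] = np.diagonal(A, k)
    return solveh_banded(ab, H.conj().T @ y)
```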

  • Design of a Non-linear Quantizer for Transform Domain DVC

    Murat B. BADEM  Rajitha WEERAKKODY  Anil FERNANDO  Ahmet M. KONDOZ  

     
PAPER-Digital Signal Processing

Vol: E92-A No:3  Page(s): 847-852

Distributed Video Coding (DVC) is an emerging video coding paradigm characterized by a flexible architecture for designing very low-cost video encoders. This feature could be effectively utilized in a number of potential many-to-one video coding applications. However, the compression efficiency of the latest DVC implementations still falls behind the state of the art in conventional video coding, namely H.264/AVC. In this paper, a novel non-linear quantization algorithm is proposed for DVC in order to improve the rate-distortion (RD) performance. The proposed solution exploits the dominant contribution to picture quality made by the relatively small coefficients, which are heavily concentrated near zero when the residual video signal of the Wyner-Ziv frames is considered in the transform domain. The performance of the proposed solution incorporating the non-linear quantizer is compared with that of an existing transform-domain DVC solution using a linear quantizer. The simulation results show a consistently improved RD performance at all bitrates for test video sequences with varying motion levels.
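
To see why a non-linear quantizer suits coefficients concentrated near zero, consider a generic companded quantizer that spends finer cells on small amplitudes. This μ-law-style sketch illustrates the principle only; it is not the paper's quantizer design, and all parameters are assumptions.

```python
import numpy as np

def mulaw_quantize(x, levels=16, mu=255.0, xmax=1.0):
    """Compand with a mu-law curve, quantize uniformly, then expand.

    Small-amplitude coefficients (the common case for Wyner-Ziv
    residuals) get finer effective cells than large ones.
    """
    x = np.clip(np.asarray(x, dtype=float) / xmax, -1.0, 1.0)
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)  # compress
    half = levels // 2
    idx = np.clip(np.floor(y * half), -half, half - 1)        # mid-rise cells
    q = (idx + 0.5) / half                                    # cell midpoints
    return xmax * np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu
```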

  • Zero-Forcing Beamforming Multiuser-MIMO Systems with Finite Rate Feedback for Multiple Stream Transmission per User

    Masaaki FUJII  

     
LETTER-Wireless Communication Technologies

Vol: E92-B No:3  Page(s): 1035-1038

We describe a channel-vector quantization scheme suitable for multiple-stream transmission per user in zero-forcing beamforming (ZFBF) multiuser multiple-input multiple-output (MU-MIMO) systems with finite-rate feedback. Multiple subsets of a channel matrix are quantized to vectors from random vector codebooks. The quantization vectors whose angle difference is closest to orthogonal are then selected, and their indexes are fed back to the transmitter. Simulation results demonstrate that the proposed scheme achieves a better average throughput than serving a single stream per user when the number of active users is smaller than the number of transmit antennas, and an average throughput close to that of serving a single stream per user when the number of users equals the number of transmit antennas.
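
A hedged sketch of the selection idea: quantize each channel subset to its best-matching codewords, then among a few top candidates pick the pair whose mutual angle is closest to orthogonal. The candidate count, matching metric, and function names are assumptions for illustration, not the letter's exact rule.

```python
import numpy as np

def quantize_two_streams(H_user, codebook, n_cand=4):
    """Pick two codebook vectors for two streams of one user.

    H_user   : (2, Nt) rows = channel vectors of the two streams
    codebook : (C, Nt) unit-norm random quantization vectors
    For each row, keep the n_cand best-matching codewords, then choose
    the pair whose inner product is closest to zero (most orthogonal).
    """
    corr = np.abs(H_user.conj() @ codebook.T)        # (2, C) match quality
    cand0 = np.argsort(corr[0])[-n_cand:]
    cand1 = np.argsort(corr[1])[-n_cand:]
    best, best_pair = np.inf, None
    for i in cand0:
        for j in cand1:
            if i == j:
                continue
            ortho = np.abs(codebook[i].conj() @ codebook[j])
            if ortho < best:
                best, best_pair = ortho, (i, j)
    return best_pair                                  # indexes fed back
```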

  • Segmentation of Arteries in Minimally Invasive Surgery Using Change Detection

    Hamed AKBARI  Yukio KOSUGI  Kazuyuki KOJIMA  

     
PAPER-Image Recognition, Computer Vision

Vol: E92-D No:3  Page(s): 498-505

In laparoscopic surgery, the lack of tactile sensation and 3D visual feedback makes it difficult to identify the position of a blood vessel intraoperatively. An unintentional partial tear or complete rupture of a blood vessel may result in a serious complication; moreover, if the surgeon cannot manage the situation, open surgery becomes necessary. Differentiating arteries from veins and other structures, and detecting them independently, has a variety of applications in surgical procedures involving the head, neck, lung, heart, abdomen, and extremities. We use the artery's pulsatile movement to detect arteries and differentiate them from veins. The change-detection algorithm in this study uses edge detection for unsupervised image registration. Changed regions are identified by subtracting the systolic and diastolic images. As a post-processing step, region properties, including color average, area, major and minor axis lengths, perimeter, and solidity, are used as inputs to an LVQ (Learning Vector Quantization) network, which classifies regions into two classes: arteries and non-artery regions. After post-processing, arteries can be detected in the laparoscopic field. The registration method used here is evaluated against other linear and nonlinear elastic methods. The performance of the method is evaluated for artery detection in several laparoscopic surgeries on an animal model and on eleven human patients, with false negative and false positive rates as the evaluation criteria. The algorithm is able to detect artery regions even when the arteries are obscured by other tissues.
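
The core change-detection step can be sketched in a few lines, assuming the systolic and diastolic frames are already registered (the paper uses edge-based unsupervised registration); the connected-component properties of the resulting mask would then feed the LVQ classifier. The threshold and parameter names are illustrative.

```python
import numpy as np

def pulsation_mask(frame_systole, frame_diastole, thresh=20):
    """Candidate artery regions from pulsatile intensity change.

    Returns a boolean mask of pixels whose systole-diastole difference
    exceeds `thresh`. Region properties of the mask's connected
    components (area, color average, axis lengths, perimeter, solidity)
    would then be classified by an LVQ network as artery / non-artery.
    """
    diff = np.abs(frame_systole.astype(np.int16) -
                  frame_diastole.astype(np.int16))
    return diff > thresh
```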

  • Fast and Accurate Generalized Harmonic Analysis and Its Parallel Computation by GPU

    Hisayori NODA  Akinori NISHIHARA  

     
PAPER

Vol: E92-A No:3  Page(s): 745-752

A fast and accurate method for Generalized Harmonic Analysis is proposed. The method estimates the parameters of one sinusoid at a time and subtracts it from the target signal. The frequency of each sinusoid is estimated around a peak of the Fourier spectrum using binary search, which allows the trade-off between frequency accuracy and computation time to be controlled. The amplitude and phase are estimated so as to minimize the squared sum of the residue after the estimated sinusoids are extracted from the target signal. Sinusoid parameters are then recalculated, using a windowed Discrete-Time Fourier Transform, to reduce errors introduced by the peak detection. Analyses of audio signals confirm the accuracy of the proposed method compared to existing methods. The proposed algorithm has a high degree of concurrency and is well suited to implementation on a Graphics Processing Unit (GPU); its computational throughput can exceed the input audio signal rate.
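
A sketch of the frequency refinement, assuming the FFT peak is bracketed by normalized frequencies [f_lo, f_hi]: the bracket is repeatedly shrunk by comparing DTFT magnitudes at interior points (a binary-search-style interval reduction whose iteration count sets the accuracy/time trade-off), followed by a least-squares fit of amplitude and phase. Windowing and the parameter-recalculation pass of the paper are omitted.

```python
import numpy as np

def refine_peak_frequency(x, f_lo, f_hi, iters=30):
    """Refine a sinusoid frequency around an FFT peak by interval shrinking.

    x            : real-valued signal
    f_lo, f_hi   : normalized frequencies (cycles/sample) bracketing the peak
    Returns (frequency, amplitude, phase) for x ~ A*cos(2*pi*f*n + phi).
    """
    n = np.arange(len(x))

    def mag(f):                      # |DTFT| of x at frequency f
        return np.abs(np.dot(x, np.exp(-2j * np.pi * f * n)))

    for _ in range(iters):           # keep the sub-interval with the peak
        m1 = f_lo + (f_hi - f_lo) / 3
        m2 = f_hi - (f_hi - f_lo) / 3
        if mag(m1) < mag(m2):
            f_lo = m1
        else:
            f_hi = m2
    f = 0.5 * (f_lo + f_hi)
    # Least-squares amplitude/phase at the refined frequency:
    # x ~ a*cos(2*pi*f*n) + b*sin(2*pi*f*n), a = A*cos(phi), b = -A*sin(phi).
    c = np.column_stack([np.cos(2 * np.pi * f * n), np.sin(2 * np.pi * f * n)])
    a, b = np.linalg.lstsq(c, x, rcond=None)[0]
    return f, np.hypot(a, b), np.arctan2(-b, a)
```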

  • What Can We See behind Sampling Theorems?

    Hidemitsu OGAWA  

     
INVITED PAPER

Vol: E92-A No:3  Page(s): 688-695

This paper shows that there is a fruitful world behind sampling theorems. For this purpose, the sampling problem is reformulated from a functional-analytic standpoint, revealing that it is a kind of inverse problem. The sampling problem covers, for example, signal and image restoration including super-resolution, image reconstruction from projections such as CT scanners in hospitals, and supervised learning such as learning in artificial neural networks. An optimal reconstruction operator is also given, providing the best approximation to an individual original signal without knowledge of the original signal.

  • Hierarchical Composition of Self-Stabilizing Protocols Preserving the Fault-Containment Property

    Yukiko YAMAUCHI  Sayaka KAMEI  Fukuhito OOSHITA  Yoshiaki KATAYAMA  Hirotsugu KAKUGAWA  Toshimitsu MASUZAWA  

     
PAPER-Distributed Cooperation and Agents

Vol: E92-D No:3  Page(s): 451-459

A desired property of large distributed systems is self-adaptability against faults, which occur more frequently as the size of the distributed system grows. Self-stabilizing protocols provide autonomous recovery from any finite number of transient faults. Fault-containing self-stabilizing protocols promise not only self-stabilization but also containment of faults (quick recovery and small effect) when the number of faults is small. However, existing composition techniques for self-stabilizing protocols (e.g., fair composition) cannot preserve the fault-containment property when composing fault-containing self-stabilizing protocols. In this paper, we present the Recovery Waiting Fault-containing Composition (RWFC) framework, which composes multiple fault-containing self-stabilizing protocols while preserving the fault-containment property of the source protocols.

  • Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    Eun-Hyon BAE  Kyun-Kyung LEE  

     
LETTER-Antennas and Propagation

Vol: E92-B No:3  Page(s): 1053-1056

A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source with a uniform circular array (UCA) that includes a center sensor. Exploiting the centrosymmetry of the UCA and the noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e., the azimuth and elevation angles, and then estimates the range of the source. Despite its low computational complexity, the proposed algorithm provides estimation performance close to that of the benchmark 3-D MUSIC estimator.

  • Training Set Selection for Building Compact and Efficient Language Models

    Keiji YASUDA  Hirofumi YAMAMOTO  Eiichiro SUMITA  

     
PAPER-Natural Language Processing

Vol: E92-D No:3  Page(s): 506-511

Statistical language model training requires corpora matched to the target domain. However, training corpora often include both matched and unmatched sentences; in such cases, training set selection is effective both for reducing model size and for improving model performance. This paper describes a training set selection method for statistical language model training. The method offers two advantages: it improves language model performance, and it reduces the computational load of the language model. The method has four steps. 1) Sentence clustering is applied to all available corpora. 2) A language model is trained on each cluster. 3) Perplexity on the development set is calculated using each language model. 4) The final language model is trained on the clusters whose language models yield low perplexities. The experimental results indicate that the language model trained on the data selected by our method gives lower perplexity on an open test set than a language model trained on all available corpora.
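
Steps 3) and 4) can be sketched with a deliberately simple smoothed unigram model standing in for the paper's language models: rank clusters by development-set perplexity and keep the lowest-perplexity ones. The names and the smoothing scheme are illustrative assumptions.

```python
import math
from collections import Counter

def cluster_perplexities(clusters, dev_tokens, alpha=0.1):
    """Rank sentence clusters by dev-set perplexity of a unigram LM.

    clusters   : list of flat token lists, one per cluster
    dev_tokens : tokens of the development set
    Returns (perplexity, cluster_id) pairs sorted ascending; the final
    LM would be trained on the lowest-perplexity clusters.
    """
    results = []
    for cid, toks in enumerate(clusters):
        counts = Counter(toks)
        vocab = len(counts) + 1                      # +1 for unseen words
        total = len(toks)
        logp = 0.0
        for w in dev_tokens:                         # add-alpha smoothing
            p = (counts[w] + alpha) / (total + alpha * vocab)
            logp += math.log(p)
        results.append((math.exp(-logp / len(dev_tokens)), cid))
    return sorted(results)
```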

  • New Approach of Laser-SQUID Microscopy to LSI Failure Analysis Open Access

    Kiyoshi NIKAWA  Shouji INOUE  Tatsuoki NAGAISHI  Toru MATSUMOTO  Katsuyoshi MIURA  Koji NAKAMAE  

     
INVITED PAPER

Vol: E92-C No:3  Page(s): 327-333

We have proposed and successfully demonstrated a two-step method for localizing defects on an LSI chip. The first step is conventional laser-SQUID (L-SQUID) imaging, in which the SQUID and the laser beam are fixed while the LSI chip is scanned. The second step is a new L-SQUID imaging mode in which the laser beam stays at the point located in the first step while the SQUID is scanned. In the second step, the effective SQUID size (Aeff) and the distance between the SQUID and the LSI chip (ΔZ) are the key factors limiting spatial resolution. To improve the spatial resolution, we developed a micro-SQUID and a vacuum chamber housing both the micro-SQUID and the LSI chip. The Aeff of the micro-SQUID is one thousandth that of a conventional SQUID, and the minimum ΔZ was successfully reduced to 25 µm by placing both the micro-SQUID and the LSI chip in the same vacuum chamber. The spatial resolution of the second step was shown to be 53 µm. Localization of actual complicated defects was successfully demonstrated, suggesting that the two-step localization method is useful for LSI failure analysis.

  • Turbo Equalization of Nonlinear TDMA Satellite Signals

    Yen-Chih CHEN  Yu Ted SU  

     
PAPER-Satellite Communications

Vol: E92-B No:3  Page(s): 992-997

In this paper, we investigate a coded solution for compensating the nonlinear distortion of TDMA satellite waveforms. Based on a Volterra-type channel model and the turbo principle, we present a turbo-like system that adds a simple rate-1 encoder at the transmitter in addition to the conventional channel encoder; the receiver iteratively equalizes the nonlinear channel effect and decodes the received symbols. Some other design alternatives are also explored, and computer-simulated performance is presented. Numerical results show that the proposed turbo system achieves significant improvement over conventional approaches.

  • Multiresolutional Gaussian Mixture Model for Precise and Stable Foreground Segmentation in Transform Domain

    Hiroaki TEZUKA  Takao NISHITANI  

     
PAPER

Vol: E92-A No:3  Page(s): 772-778

This paper describes a multiresolutional Gaussian mixture model (GMM) for precise and stable foreground segmentation. GMMs over multiple block sizes and a computationally efficient fine-to-coarse strategy, both carried out in the Walsh transform (WT) domain, are newly introduced to the GMM scheme. The set of variable-size block-based GMMs yields precise and stable segmentation, while the fine-to-coarse strategy, which follows from the spectral nature of the WT, drastically reduces the number of computational steps. In total, the proposed approach requires less than 10% of the computation of the original pixel-based GMM approach. Experimental results show that our approach performs stably in many conditions, including dark foreground objects against light, global lighting changes, and scenery in heavy snow.
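
Below is a much-simplified sketch of the transform-domain modeling, assuming 8×8 blocks and a single Gaussian per Walsh coefficient in place of the paper's full GMM and multi-block fine-to-coarse strategy; all names and thresholds are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

class BlockWTBackground:
    """Simplified transform-domain background model: one Gaussian per
    Walsh-Hadamard coefficient per block (the paper uses a full GMM
    over multiple block sizes, omitted here)."""

    def __init__(self, bs=8, lr=0.02, k=3.0):
        self.H = hadamard(bs) / np.sqrt(bs)   # orthonormal WHT, bs power of 2
        self.bs, self.lr, self.k = bs, lr, k
        self.mean = None
        self.var = None

    def _wt(self, frame):
        """2-D Walsh-Hadamard transform of non-overlapping bs x bs blocks."""
        bs = self.bs
        h, w = frame.shape
        blk = frame[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
        return np.einsum('pa,iajb,qb->ipjq', self.H, blk, self.H)

    def update(self, frame):
        """Return a per-block foreground mask and update the model."""
        coef = self._wt(frame.astype(np.float64))
        if self.mean is None:
            self.mean, self.var = coef.copy(), np.full_like(coef, 100.0)
        d2 = (coef - self.mean) ** 2 / self.var   # normalized deviation
        fg = d2.mean(axis=(1, 3)) > self.k        # block-level decision
        self.mean += self.lr * (coef - self.mean)
        self.var += self.lr * ((coef - self.mean) ** 2 - self.var)
        return fg
```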

  • Visual Aerial Navigation through Adaptive Prediction and Hyper-Space Image Matching

    Muhammad Anwaar MANZAR  Tanweer Ahmad CHEEMA  Abdul JALIL  Ijaz Mansoor QURESHI  

     
PAPER-Pattern Recognition

Vol: E92-D No:2  Page(s): 283-297

Image matching is an important area of research in artificial intelligence, machine vision, and visual navigation. This paper presents a new image matching scheme suitable for visual navigation. In this scheme, gray-scale images are sliced and quantized to form sub-band binary images. The information in the binary images is then signaturized to form a vector space, and the signatures are sorted by significance. The sorted signatures are normalized so that the represented pictorial features become rotation and scale invariant. To match two images, their vector spaces are compared in the transformed domain. This comparison yields efficient results directly in the spatial domain, avoiding the need for an inverse image transformation. Unlike conventional correlation, it avoids computing squared errors over the entire image; instead, it guides the solution to converge towards the estimate given by adaptive prediction, enabling high-speed performance on aerial video sequences. A four-dimensional solution population scheme is also presented, together with a matching confidence factor that terminates the iterations once the essential matching conditions have been achieved. The proposed scheme gives robust and fast results for normal, scaled, and rotated templates. Speed comparisons with older techniques show the computational viability of the new technique and its much weaker dependence on image size. The method also shows immunity to AWGN at 30 dB and to impulsive noise.

  • A Decentralized Multi-Group Key Management Scheme

    Junbeom HUR  Hyunsoo YOON  

     
LETTER-Network Management/Operation

Vol: E92-B No:2  Page(s): 632-635

    Scalability is one of the most important requirements for secure multicast in a multi-group environment. In this study, we propose a decentralized multi-group key management scheme that allows each multicast group sender to control the access to its group communication independently. Scalability is enhanced by local rekeying and inter-working among different subgroups. The group key secrecy and backward/forward secrecy are also guaranteed.

  • Trend of Autonomous Decentralized System Technologies and Their Application in IC Card Ticket System Open Access

    Kinji MORI  Akio SHIIBASHI  

     
INVITED SURVEY PAPER

Vol: E92-B No:2  Page(s): 445-460

The advancement of technology is ensured by step-by-step innovation and its implementation in society. Autonomous Decentralized Systems (ADSs) have been growing since they were first proposed in 1977. Since then, ADS technologies and their implementations have interacted with evolving markets, sciences, and technologies. The ADS concept was proposed on the basis of a biological analogy, and its technologies have advanced with changing and expanding requirements. These technologies are now categorized into six generations on the basis of requirements and system structures, but the ADS concept and its system architecture have not changed. The requirements for the system can be divided into operation-oriented, mass service-oriented, and personal service-oriented categories. Moreover, these technologies have been realized first in homogeneous system structures and, as the next step, in heterogeneous system structures. They have been widely applied in manufacturing, telecommunications, information provision/utilization, data centers, transportation, and so on, and have been operating successfully throughout the world. In particular, ADS technologies have been applied in Suica, the IC card ticket system (ICCTS) for fare collection and e-commerce. Not only is this system expanding in size and functionality, but its components are also modified almost every day without stopping its operation. This paper describes the system and its technologies. Finally, the future direction of ADS is discussed, and one of its technologies is presented.

  • Adaptive Packet Size Control Using Beta Distribution Mobility Estimation for Rapidly Changing Mobile Networks

    Dong-Chul GO  Jong-Moon CHUNG  Su Young LEE  

     
LETTER-Fundamental Theories for Communications

Vol: E92-B No:2  Page(s): 599-603

An adaptive algorithm for optimizing the packet size in wireless mobile networks with Gauss-Markov mobility is presented. The proposed algorithm adapts the packet size for mobile terminals that experience relatively fast-changing channel conditions, caused by fast mobility or other rapidly changing interference. Because the channel changes quickly, the packet size controller uses a short channel history for channel status estimation and exploits a pre-calculated probability density function (PDF) of the distance between mobile nodes in the estimation process. The packet size is adapted to maximize the communication performance under automatic repeat request (ARQ). The algorithm is based on an estimate of the channel error rate and on link statistics obtained from the mobility pattern. We find that the distribution of the link distance among mobile nodes following the Gauss-Markov mobility pattern within a circular communication range is well fitted by a Beta PDF. By adopting the Beta PDF derived from the mobility pattern, the results show that the channel condition can be estimated more accurately, thereby improving throughput and utilization in rapidly changing wireless mobile networking systems.
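
Fitting the Beta PDF to observed link distances can be sketched with SciPy, assuming distances are normalized by the circular communication range so that they fall in the Beta support (0, 1). The function and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

def fit_link_distance_beta(distances, comm_range):
    """Fit a Beta PDF to link distances observed under Gauss-Markov mobility.

    distances  : array of node-pair distances from the mobility model
    comm_range : radius of the circular communication range
    The normalized distances d/comm_range lie in (0, 1), matching the
    Beta support; the fitted PDF can then stand in for a long channel
    history when estimating link statistics for packet-size adaptation.
    """
    z = np.clip(np.asarray(distances) / comm_range, 1e-6, 1 - 1e-6)
    a, b, loc, scale = beta.fit(z, floc=0.0, fscale=1.0)
    return a, b    # shape parameters of the fitted Beta PDF
```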

  • On the Infimum and Supremum of Fuzzy Inference by Single Input Type Fuzzy Inference

    Hirosato SEKI  Hiroaki ISHII  

     
PAPER-General Fundamentals and Boundaries

Vol: E92-A No:2  Page(s): 611-617

Fuzzy inference plays a significant role in many applications. Although the simplified fuzzy inference method is currently the most widely used, the number of fuzzy rules becomes very large, so setting up and adjusting the rules is difficult. On the other hand, Yubazaki et al. have proposed the "single input rule modules connected fuzzy inference method" (SIRMs method), whose final output is obtained by summing the products of the importance degrees and the inference results of the single-input fuzzy rule modules. Seki et al. have shown that the simplified fuzzy inference method and the SIRMs method are equivalent when the sum of the diagonal elements in the rules of the simplified fuzzy inference method equals that of the cross-diagonal elements. This paper clarifies the conditions for the infimum and supremum of fuzzy inference using the single input type fuzzy inference method, from the viewpoint of fuzzy inference.
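
A minimal sketch of the SIRMs inference itself, using the usual height-method (simplified) inference within each single-input module and an importance-weighted sum across modules; the membership functions and importance degrees in the example are arbitrary illustrations.

```python
import numpy as np

def sirms_infer(x, modules, importance):
    """Single Input Rule Modules (SIRMs) connected fuzzy inference sketch.

    x          : input vector, one value per rule module
    modules    : list of (membership_fns, consequents) per input, where
                 membership_fns is a list of functions R -> [0, 1]
    importance : importance degree w_i of each module
    Output is the importance-weighted sum of each module's simplified
    (height-method) inference result.
    """
    y = 0.0
    for xi, (mfs, cons), wi in zip(x, modules, importance):
        h = np.array([mf(xi) for mf in mfs])          # rule firing strengths
        y += wi * float(h @ np.asarray(cons)) / (h.sum() + 1e-12)
    return y

# Example: two inputs, each with two complementary membership functions.
low = lambda t: max(0.0, min(1.0, 1.0 - t))
high = lambda t: max(0.0, min(1.0, t))
modules = [([low, high], [0.0, 1.0]), ([low, high], [1.0, 0.0])]
print(sirms_infer([0.3, 0.8], modules, importance=[0.6, 0.4]))
```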
