
Keyword Search Result

[Keyword] pruning (38 hits)

Results 21-38 of 38

  • Efficient Beam Pruning for Speech Recognition with a Reward Considering the Potential to Reach Various Words on a Lexical Tree

    Tsuneo KATO  Kengo FUJITA  Nobuyuki NISHIZAWA  

     
    PAPER-Speech and Hearing
    Vol: E94-D No:6, Page(s): 1253-1259

    This paper presents an efficient frame-synchronous beam pruning method for HMM-based automatic speech recognition. In conventional beam pruning, the few hypotheses that have greater potential to reach various words on a lexical tree are likely to be pruned out by the many hypotheses that have limited potential, since all hypotheses are treated equally without considering this potential. To make the beam pruning less restrictive for hypotheses with greater potential and more restrictive for those without, the proposed method adds to the likelihood of each hypothesis a tentative reward that is a monotonically increasing function of the number of words reachable from the HMM state where the hypothesis stays in the lexical tree. The reward is designed not to break the probabilistic framework of ASR. The proposed method reduced the processing time by 84% on a grammar-based 10k-word short-sentence recognition task. On a language-model-based dictation task, it also yielded an additional 23% reduction in processing time over beam pruning with the language-model look-ahead technique.
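
    A minimal sketch of the reward-augmented pruning idea, assuming a hypothetical hypothesis tuple of (log-likelihood, number of reachable words, state id); the logarithmic reward form, its scale, and the beam width are illustrative placeholders, not the paper's actual design.

```python
import math

def reward(num_reachable_words, scale=2.0):
    # Monotonically increasing reward in the number of reachable words
    # (illustrative form; the paper designs it so that the probabilistic
    # framework of ASR is not broken).
    return scale * math.log(1 + num_reachable_words)

def beam_prune(hypotheses, beam_width):
    # hypotheses: list of (log_likelihood, num_reachable_words, state_id).
    # Rank by likelihood plus the tentative reward, so hypotheses that can
    # still reach many words survive a tighter beam than those that cannot.
    scored = [(ll + reward(n), state) for ll, n, state in hypotheses]
    best = max(score for score, _ in scored)
    return [state for score, state in scored if score >= best - beam_width]

# The near-leaf hypothesis is pruned despite its higher raw likelihood.
hyps = [(-100.0, 500, "near_root"), (-98.0, 2, "near_leaf")]
print(beam_prune(hyps, beam_width=5.0))
```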

  • An Efficient Statistical Pruning Algorithm for Fixed-Complexity Sphere Decoder

    Sheng LEI  Xin ZHANG  Cong XIONG  Dacheng YANG  

     
    LETTER-Wireless Communication Technologies
    Vol: E94-B No:3, Page(s): 834-837

    We propose an efficient statistical pruning (SP) algorithm for the fixed-complexity sphere decoder (FSD) that utilizes partial decision-feedback detection (i.e., SP-FSD). Simulation results show that SP-FSD not only attains near-optimal performance, but also achieves much lower complexity than the original FSD and two recently developed variants: the simplified FSD (SFSD) and the statistical threshold-based FSD (ST-FSD).

  • Automatic Defect Classification System in Semiconductors EDS Test Based on System Entity Structure Methodology

    Young-Shin HAN  SoYoung KIM  TaeKyu KIM  Jason J. JUNG  

     
    LETTER-Artificial Intelligence, Data Mining
    Vol: E93-D No:7, Page(s): 2001-2004

    We exploit a structural knowledge representation scheme called the System Entity Structure (SES) methodology to represent and manage wafer failure patterns, which can have a significant impact on FABs in the semiconductor industry. It is important for engineers to simulate various system verification processes by using the predefined system entities (e.g., decomposition, taxonomy, and coupling relationships of a system) contained in the SES. For better computational performance, given a certain failure pattern, a Pruned SES (PES) can be extracted by selecting only the relevant system entities from the SES. The SES-based simulation system therefore allows engineers to efficiently evaluate and monitor semiconductor data by i) analyzing failures to find their causes and ii) managing historical data related to such failures.
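
    A rough illustration (not the authors' SES/PES toolchain) of the pruning step: given a nested entity structure, keep only the branches that contain entities relevant to a given failure pattern. The entity names are hypothetical.

```python
def prune_ses(node, relevant):
    """Return a pruned copy of an entity tree, keeping only branches that
    contain at least one entity named in `relevant`.
    A node is a (name, children) pair; children is a list of nodes."""
    name, children = node
    kept = [c for c in (prune_ses(ch, relevant) for ch in children) if c]
    if name in relevant or kept:
        return (name, kept)
    return None  # nothing relevant below this entity

ses = ("wafer_eds_test", [
    ("edge_failure", [("ring_pattern", []), ("crescent_pattern", [])]),
    ("center_failure", [("spot_pattern", [])]),
])
# Only the branch leading to the relevant failure pattern survives.
print(prune_ses(ses, {"spot_pattern"}))
```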

  • Moving Picture Coding by Lapped Transform and Edge Adaptive Deblocking Filter with Zero Pruning SPIHT

    Nasharuddin ZAINAL  Toshihisa TANAKA  Yukihiko YAMASHITA  

     
    PAPER-Image Processing and Video Processing
    Vol: E93-D No:6, Page(s): 1608-1617

    We propose a moving picture coding method based on the lapped transform and an edge-adaptive deblocking filter to reduce blocking distortion. We apply subband coding (SBC) with the lapped transform (LT) and zero pruning set partitioning in hierarchical trees (zpSPIHT) to encode the difference picture. Effective coding with zpSPIHT is achieved by quantizing and pruning the quantized zeros. The blocking distortion caused by block motion-compensated prediction is reduced by an edge-adaptive deblocking filter. Since the original edges can be detected precisely in the reference picture, an edge-adaptive deblocking filter applied to the predicted picture is very effective. Experimental results show that blocking distortion is visually reduced at very low bit-rate coding and a PSNR improvement of about 1.0 dB is achieved.

  • A One-Pass Real-Time Decoder Using Memory-Efficient State Network

    Jian SHAO  Ta LI  Qingqing ZHANG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-ASR System Architecture
    Vol: E91-D No:3, Page(s): 529-537

    This paper presents our decoder, which statically optimizes part of the knowledge sources while handling the others dynamically. The lexicon, phonetic contexts, and acoustic model are statically integrated to form a memory-efficient state network, while the language model (LM) is dynamically incorporated on the fly by means of extended tokens. The novelties of our approach to constructing the state network are (1) introducing two layers of dummy nodes to cluster the cross-word (CW) context-dependent fan-in and fan-out triphones, (2) introducing a so-called "WI layer" to store the word identities, with the nodes of this layer placed in the non-shared mid-part of the network, and (3) optimizing the network at the state level by a thorough forward and backward node-merge process. The state network is organized as a multi-layer structure for distinct token propagation at each layer. By exploiting the characteristics of the state network, several techniques including LM look-ahead, LM caching, and beam pruning are specially designed for search efficiency. In particular, for beam pruning, a layer-dependent pruning method is proposed to further reduce the search space. The layer-dependent pruning takes account of the neck-like characteristics of the WI layer and the reduced variety of word endings, which enables a tighter beam without introducing many search errors. In addition, other techniques including LM compression, lattice-based bookkeeping, and lattice garbage collection are employed to reduce the memory requirements. Experiments are carried out on a Mandarin spontaneous speech recognition task in which the decoder uses a trigram LM and CW triphone models. A comparison with HDecode of the HTK toolkit shows that, within a 1% performance deviation, our decoder runs 5 times faster with half the memory footprint.
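
    An illustrative fragment of the layer-dependent beam pruning idea only, assuming hypothetical tokens tagged with the network layer they currently occupy; the layer names and per-layer beam widths are placeholders (tightest at the neck-like WI layer), not values from the paper.

```python
from collections import defaultdict

# Hypothetical per-layer beam widths (log-likelihood margins).
LAYER_BEAMS = {"fan_in": 120.0, "mid": 100.0, "WI": 60.0, "fan_out": 120.0}

def layer_dependent_prune(tokens):
    # tokens: list of (score, layer, state_id); prune each layer with its
    # own beam width instead of a single global beam.
    by_layer = defaultdict(list)
    for tok in tokens:
        by_layer[tok[1]].append(tok)
    survivors = []
    for layer, toks in by_layer.items():
        best = max(t[0] for t in toks)
        survivors.extend(t for t in toks if t[0] >= best - LAYER_BEAMS[layer])
    return survivors

print(layer_dependent_prune([(-10.0, "WI", 1), (-80.0, "WI", 2), (-80.0, "mid", 3)]))
```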

  • Pruned Resampling: Probabilistic Model Selection Schemes for Sequential Face Recognition

    Atsushi MATSUI  Simon CLIPPINGDALE  Takashi MATSUMOTO  

     
    PAPER
    Vol: E90-D No:8, Page(s): 1151-1159

    This paper proposes probabilistic pruning techniques for a Bayesian video face recognition system. The system selects the most probable face model using model posterior distributions, which can be calculated using a Sequential Monte Carlo (SMC) method. A combination of two new pruning schemes at the resampling stage significantly boosts computational efficiency by comparison with the original online learning algorithm. Experimental results demonstrate that this approach achieves better performance in terms of both processing time and ID error rate than a contrasting approach with a temporal decay scheme.
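
    A toy stand-in for pruning at the resampling stage, assuming particles are (face-model id, weight) pairs: models whose posterior mass falls below a floor are dropped before resampling. The floor-based rule is an assumption for illustration, not the paper's two schemes.

```python
import random

def pruned_resample(particles, n_out, posterior_floor=0.05):
    # particles: list of (model_id, weight) pairs from the SMC update.
    mass = {}
    for model, w in particles:
        mass[model] = mass.get(model, 0.0) + w
    total = sum(mass.values())
    keep = {m for m, w in mass.items() if w / total >= posterior_floor}
    kept = [(m, w) for m, w in particles if m in keep]
    z = sum(w for _, w in kept)
    # Resample only among the surviving face models.
    return random.choices([m for m, _ in kept],
                          weights=[w / z for _, w in kept], k=n_out)

random.seed(0)
print(pruned_resample([("face_A", 0.60), ("face_B", 0.38), ("face_C", 0.02)], 10))
```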

  • Assessment of On-Line Model Quality and Threshold Estimation in Speaker Verification

    Javier R. SAETA  Javier HERNANDO  

     
    PAPER-Speech and Hearing
    Vol: E90-D No:4, Page(s): 759-765

    The selection of the most representative utterances from a speaker is essential for proper automatic enrollment in speaker verification. Model quality measures and threshold estimation methods mainly deal with the scarcity of data and the difficulty of obtaining impostor data in real applications. Conventional methods estimate the quality of the training utterances once the model has been created. In that case, it is not possible to ask the user for more utterances during the training session if necessary; a new training session must be started. This is especially problematic in applications where only one or two enrollment sessions are allowed. In this paper, a new on-line quality method based on a male and a female Universal Background Model (UBM) is introduced. The two models act as a reference for new utterances, showing whether they belong to the same speaker while providing a measure of their quality. On the other hand, the estimation of the verification threshold is also strongly influenced by the prior selection of the speaker's utterances. In this context, potential outliers, i.e., client scores that are far from the mean, could lead to wrong estimates of the client mean and variance. To alleviate this problem, several efficient threshold estimation methods based on removing or weighting scores are proposed here. Before estimating the threshold, client scores classified as outliers are removed, pruned, or down-weighted, improving the subsequent estimates. Text-dependent experiments have been carried out using a multi-session telephone database in Spanish, recorded by the authors and containing 184 speakers.
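
    A minimal sketch of outlier-pruned threshold estimation, assuming simple client/impostor score statistics; the pruning rule (k standard deviations from the mean) and the convex combination are illustrative, not the paper's exact estimators.

```python
import statistics

def estimate_threshold(client_scores, impostor_mean, k=1.5, alpha=0.5):
    mu = statistics.mean(client_scores)
    sd = statistics.pstdev(client_scores)
    # Prune client scores that lie far from the mean (potential outliers).
    pruned = [s for s in client_scores if abs(s - mu) <= k * sd]
    mu_pruned = statistics.mean(pruned)
    # Place the threshold between the pruned client mean and the impostor mean.
    return alpha * mu_pruned + (1 - alpha) * impostor_mean

# The score -1.5 is treated as an outlier and removed before estimation.
print(estimate_threshold([2.1, 2.3, 2.0, -1.5, 2.2], impostor_mean=-2.0))
```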

  • Pruning-Based Unsupervised Segmentation for Korean

    In-Su KANG  Seung-Hoon NA  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing
    Vol: E89-D No:10, Page(s): 2670-2677

    Compound noun segmentation is a key component of Korean language processing. Supervised approaches require some form of human intervention, such as maintaining lexicons, manually segmenting corpora, or devising heuristic rules. Thus, they suffer from the unknown word problem and cannot distinguish domain-oriented or corpus-directed segmentation results from the others. These problems can be overcome by unsupervised approaches that employ segmentation clues obtained purely from a raw corpus. However, most unsupervised approaches require tuning of empirical parameters or learning of a statistical dictionary. To develop a tuning-less, learning-free unsupervised segmentation algorithm, this study proposes a pruning-based unsupervised technique that eliminates unhelpful segmentation candidates. In addition, unlike previous unsupervised methods that have relied on purely character-based segmentation clues, this study utilizes word-based segmentation clues. Experimental evaluations show that the pruning scheme is very effective for unsupervised segmentation of Korean compound nouns, and that the use of word-based prior knowledge enables better segmentation accuracy. This study also shows that the proposed algorithm performs competitively with or better than other unsupervised methods.
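
    A toy sketch of pruning segmentation candidates with word-based clues: enumerate all segmentations of a compound and keep only those whose every segment is attested in a raw-corpus word set. The corpus set and the all-or-nothing rule are assumptions for illustration.

```python
def segmentations(s):
    # Enumerate every way of cutting the string s into contiguous segments.
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        for rest in segmentations(s[i:]):
            yield [s[:i]] + rest

def prune_candidates(compound, corpus_words):
    # Keep only candidates whose every segment is attested as a word
    # in the raw corpus (a stand-in for word-based segmentation clues).
    return [c for c in segmentations(compound)
            if all(seg in corpus_words for seg in c)]

corpus = {"정보", "검색", "정보검색"}  # word forms observed in a raw corpus
print(prune_candidates("정보검색", corpus))
```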

  • A Plan-Generation-Evaluation Framework for Design Space Exploration of Digital Systems Design

    Jun Kyoung KIM  Tag Gon KIM  

     
    PAPER-VLSI Design Technology and CAD
    Vol: E89-A No:3, Page(s): 772-781

    Modern digital systems design requires us to explore a large and complex design space to find the best configuration that satisfies the design requirements. Such exploration requires a sound representation of the design space from which design candidates are efficiently generated, each of which is then evaluated. This paper proposes a plan-generation-evaluation framework that supports the complete process of such design space exploration. The plan phase constitutes a design space of all possible design alternatives by means of a formally defined representation scheme, the attributed AND-OR graph. The generation phase generates a set of candidates by algorithmically pruning the design space in the attributed AND-OR graph with respect to the design requirements as well as architectural constraints. Finally, the evaluation phase measures the performance of the design candidates in the pruned graph to select the best one. A complete cache design process is used as an example to show the effectiveness of the proposed framework.
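
    A rough sketch of the generation phase over a miniature attributed AND-OR graph (a cache example in the paper's spirit): alternatives are expanded and those violating a requirement are discarded. The node encoding and the constraint are hypothetical.

```python
def expand(node):
    # node = (kind, label, children, attrs) with kind in {"AND", "OR", "LEAF"}.
    kind, label, children, attrs = node
    if kind == "LEAF":
        return [dict(attrs)]
    if kind == "OR":                      # pick exactly one alternative
        return [c for child in children for c in expand(child)]
    combos = [{}]                         # AND: combine all children
    for child in children:
        combos = [{**a, **b} for a in combos for b in expand(child)]
    return combos

def generate(root, requirement):
    # Generation phase: discard alternatives violating the requirement.
    return [c for c in expand(root) if requirement(c)]

cache = ("AND", "cache", [
    ("OR", "size", [("LEAF", "8K", [], {"size_kb": 8}),
                    ("LEAF", "32K", [], {"size_kb": 32})], {}),
    ("OR", "assoc", [("LEAF", "1-way", [], {"assoc": 1}),
                     ("LEAF", "4-way", [], {"assoc": 4})], {}),
], {})
print(generate(cache, lambda c: c["size_kb"] * c["assoc"] <= 64))
```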

  • A Pruning Algorithm for Training Cooperative Neural Network Ensembles

    Md. SHAHJAHAN  Kazuyuki MURASE  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E89-D No:3, Page(s): 1257-1269

    We present a training algorithm to create a neural network (NN) ensemble that performs classification tasks. It employs a competitive decay of hidden nodes in the component NNs as well as a selective deletion of NNs from the ensemble, and is thus named the pruning algorithm for NN ensembles (PNNE). A node cooperation function over the hidden nodes of each NN is introduced to support the decaying process. The training is based on negative correlation learning, which ensures diversity among the component NNs of the ensemble. The less important networks are deleted according to a criterion that indicates over-fitting. The PNNE has been tested extensively on a number of standard benchmark problems in machine learning, including the Australian credit card assessment, breast cancer, circle-in-the-square, diabetes, glass identification, ionosphere, iris identification, and soybean identification problems. The results show that the classification performance of NN ensembles produced by the PNNE is better than or competitive with that of conventional constructive and fixed-architecture algorithms. Furthermore, in comparison with the constructive algorithm, the NN ensembles produced by the PNNE consist of a smaller number of component NNs, and they are more diverse owing to the uniform training of all component NNs.

  • On-Line Pruning of ZBDD for Path Delay Fault Coverage Calculation

    Fatih KOCAN  Mehmet H. GUNES  Atakan KURT  

     
    PAPER-Programmable Logic, VLSI, CAD and Layout
    Vol: E88-D No:7, Page(s): 1381-1388

    Zero-suppressed BDDs (ZBDDs) have been used in the nonenumerative path delay fault (PDF) grading of VLSI circuits. One basic and one cut-based grading algorithm have been proposed to grade circuits with polynomial and exponential numbers of PDFs, respectively. In this article, we present a new ZBDD-based basic PDF grading algorithm that enables the grading of some circuits with an exponential number of PDFs without resorting to the cut-based algorithm. The algorithm overcomes memory overflow problems by dynamically pruning the ZBDD at run time. The new algorithm may give exact or pessimistic coverage depending on the statuses of the pruned nodes. Furthermore, we re-assess the performance of static variable ordering heuristics in ZBDDs for PDF coverage calculation. The proposed algorithm, combined with efficient static variable ordering heuristics, can avoid ZBDD size explosion in many circuits. Experimental results for the ISCAS85 benchmarks show that the proposed algorithm grades circuits efficiently.

  • Pruning Rule for kMER-Based Acquisition of the Global Topographic Feature Map

    Eiji UCHINO  Noriaki SUETAKE  Chuhei ISHIGAKI  

     
    LETTER-Biocybernetics, Neurocomputing
    Vol: E88-D No:3, Page(s): 675-678

    For kernel-based topographic map formation, kMER (kernel-based maximum entropy learning rule) was proposed by Van Hulle, and several effective learning rules related to kMER have since been proposed, with many applications. However, no discussions have been made concerning the determination of the number of units in kMER. This letter describes a unit-pruning rule that permits automatic construction of an appropriately sized map to acquire the global topographic features underlying the input data. The effectiveness and validity of the present rule have been confirmed by some preliminary computer simulations.

  • Autonomous Clustering Scheme for Wireless Sensor Networks Using Coverage Estimation-Based Self-Pruning

    Kichan BAE  Hyunsoo YOON  

     
    PAPER-Network
    Vol: E88-B No:3, Page(s): 973-980

    Energy-efficient operations are essential to prolonging the lifetime of wireless sensor networks. Clustering sensor nodes is one approach that can reduce energy consumption by aggregating data, controlling transmission power levels, and putting redundant sensor nodes to sleep. To distribute the role of a cluster head, clustering approaches should be based on efficient cluster configuration schemes. Therefore, low overhead in the cluster configuration process is one of the key constraints for energy-efficient clustering. In this paper, we present an autonomous clustering approach using a coverage estimation-based self-pruning algorithm. Our strategy for clustering is to allow the best candidate node within its own cluster range to declare itself as a cluster head and to dominate the other nodes in the range. This same self-declaration strategy is also used in the active sensor election process. As a result, the proposed scheme can minimize clustering overheads by obviating both the requirements of collecting neighbor information beforehand and the iterative negotiating steps of electing cluster heads. The proposed scheme allows any type of sensor network application, including spatial query execution or periodic environment monitoring, to operate in an energy-efficient manner.
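
    A toy version of the self-declaration rule, assuming each node knows its position and a score (here, residual energy; the paper also uses coverage estimation): a node declares itself a cluster head iff no better-scoring node lies within its cluster range.

```python
import math, random

def elect_heads(nodes, cluster_range):
    # nodes: {node_id: (x, y, score)}. A node declares itself a cluster head
    # iff no neighbor within cluster_range has a higher score.
    heads = []
    for nid, (x, y, score) in nodes.items():
        dominated = any(
            s > score and math.hypot(x - ox, y - oy) <= cluster_range
            for onid, (ox, oy, s) in nodes.items() if onid != nid)
        if not dominated:
            heads.append(nid)
    return heads

random.seed(0)
nodes = {i: (random.random(), random.random(), random.random()) for i in range(20)}
print(elect_heads(nodes, cluster_range=0.3))
```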

  • Self-Organizing Neural Networks by Construction and Pruning

    Jong-Seok LEE  Hajoon LEE  Jae-Young KIM  Dongkyung NAM  Cheol Hoon PARK  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E87-D No:11, Page(s): 2489-2498

    Feedforward neural networks have been successfully developed and applied in many areas because of their universal approximation capability. However, there still remains the problem of determining a suitable network structure for the given task. In this paper, we propose a novel self-organizing neural network which automatically adjusts its structure according to the task. Utilizing both the constructive and the pruning procedures, the proposed algorithm finds a near-optimal network which is compact and shows good generalization performance. One of its important features is reliability, which means the randomness of neural networks is effectively reduced. The resultant networks can have suitable numbers of hidden neurons and hidden layers according to the complexity of the given task. The simulation results for the well-known function regression problems show that our method successfully organizes near-optimal networks.

  • Batch-Incremental Nearest Neighbor Search Algorithm and Its Performance Evaluation

    Yaokai FENG  Akifumi MAKINOUCHI  

     
    PAPER-Databases
    Vol: E86-D No:9, Page(s): 1856-1867

    In light of the increasing number of computer applications that rely heavily on multimedia data, the database community has focused on the management and retrieval of multidimensional data. Nearest neighbor (NN) queries have been widely used to perform content-based retrieval (e.g., similarity search) in multimedia applications. The incremental NN (INN) query is a kind of NN query and can be used when the number of NN objects to be retrieved is not known in advance. This paper points out the weaknesses of the existing INN search algorithms and proposes a new one, called the Batch-Incremental Nearest Neighbor search algorithm (denoted the B-INN search algorithm), which can process the INN query efficiently. The B-INN search algorithm differs from the existing INN search algorithms in that it does not employ the priority queue used by the existing INN search algorithms, which is very CPU- and memory-intensive for large databases in high-dimensional spaces. In addition, it incrementally reports b (b > 1) objects simultaneously (batch-incremental), whereas the existing INN search algorithms report the neighbors one by one. In order to implement the B-INN search, a new search (called the k-d-NN search) with a new pruning strategy is proposed. Performance tests indicate that the B-INN search algorithm clearly outperforms the existing INN search algorithms in high-dimensional spaces.
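
    A flat-scan stand-in that shows only the batch-incremental reporting interface (b objects per request); the paper's k-d-NN search instead prunes nodes of a multidimensional index rather than sorting all distances, so this is an interface sketch, not the algorithm itself.

```python
import numpy as np

def batch_incremental_nn(data, query, b=4):
    # Report the b next-nearest objects per request instead of one by one.
    dist = np.linalg.norm(data - query, axis=1)
    order = np.argsort(dist)
    for start in range(0, len(order), b):
        batch = order[start:start + b]
        yield batch, dist[batch]

rng = np.random.default_rng(0)
points = rng.random((100, 8))
gen = batch_incremental_nn(points, rng.random(8), b=4)
print(next(gen))  # first four neighbors; call next(gen) again for four more
```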

  • A Dynamic Node Decaying Method for Pruning Artificial Neural Networks

    Md. SHAHJAHAN  Kazuyuki MURASE  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E86-D No:4, Page(s): 736-751

    This paper presents a dynamic node decaying method (DNDM) for layered artificial neural networks that is suitable for classification problems. Our purpose is not to minimize the total output error but to obtain high generalization ability with a minimal structure. Users of the conventional back-propagation (BP) learning algorithm can convert their program to the DNDM by simply inserting a few lines. This method is an extension of a previously proposed method to more general classification problems, and its validity is tested with recent standard benchmark problems. In addition, we analyze the training process and the effects of various parameters. In the method, nodes in a layer compete for survival in an automatic process that uses a criterion: relatively less important nodes are decayed gradually during BP learning while more important ones play larger roles, until the best performance under the given conditions is achieved. The criterion evaluates each node by its total influence on progress toward the upper layer, and it is used as the index for dynamic competitive decaying. Two additional criteria are used: Generalization Loss to measure over-fitting and Learning Progress to stop training. Determining these criteria requires a small amount of human intervention. We have applied the algorithm to several standard benchmark problems such as the cancer, diabetes, heart disease, glass, and iris problems. The results show the effectiveness of the method. The classification error and the size of the generated networks are comparable to those obtained by other methods, which generally require larger modifications, or a complete rewrite, of the conventional BP program.
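
    A minimal sketch of the competitive decaying step, assuming node importance is measured by the total magnitude of a node's outgoing weights; the quantile threshold and decay factor are placeholders, not the DNDM's actual criterion or schedule.

```python
import numpy as np

def node_importance(W_out):
    # Importance of each hidden node: total magnitude of its influence
    # on the upper layer (sum of absolute outgoing weights).
    return np.abs(W_out).sum(axis=1)

def decay_step(W_out, decay=0.9, keep_fraction=0.5):
    # Gradually shrink the outgoing weights of the less important nodes,
    # so nodes in the layer compete for survival during BP training.
    importance = node_importance(W_out)
    threshold = np.quantile(importance, 1.0 - keep_fraction)
    W_out[importance < threshold, :] *= decay
    return W_out

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3))  # 6 hidden nodes feeding 3 upper-layer nodes
print(node_importance(decay_step(W)))
```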

  • Neural Network Model Switching for Efficient Feature Extraction

    Keisuke KAMEYAMA  Yukio KOSUGI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E82-D No:10, Page(s): 1372-1383

    In order to improve the efficiency of feature extraction in backpropagation (BP) learning for layered neural networks, model switching, which changes the function model without altering the map, is proposed. Model switching involves map-preserving reduction of units by channel fusion, or addition of units by channel installation. For reducing the model size by channel fusion, two criteria for detecting redundant channels are addressed, and the local link-weight compensations for map preservation are formulated. The upper limits of the discrepancies between the maps of the switched models are derived for use as the unified criterion in selecting the switching-model candidate. In the experiments, model switching is used during the BP training of a layered network model for image texture classification, to mitigate its inefficiency in feature extraction. The results show that fusion and re-installation of redundant channels, weight compensations on channel fusion for map preservation, and the use of the unified criterion for model selection are all effective for improved generalization ability and quick learning. Further, the possibility of using model switching for concurrent optimization of the model and the map is discussed.
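
    A simplified take on channel fusion, assuming two hidden units with (nearly) identical incoming weights: one is removed and its outgoing weights are added to the survivor's, so the map is approximately preserved. The weight shapes and the redundancy setup are assumptions for illustration.

```python
import numpy as np

def fuse_channels(W_in, W_out, i, j):
    # Keep hidden unit i, drop unit j (whose incoming weights match unit i's),
    # and add unit j's outgoing weights to unit i's as the compensation that
    # approximately preserves the input-output map.
    W_out = W_out.copy()
    W_out[i, :] += W_out[j, :]
    keep = [k for k in range(W_in.shape[1]) if k != j]
    return W_in[:, keep], W_out[keep, :]

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 5))   # 4 inputs -> 5 hidden units (columns)
W_in[:, 2] = W_in[:, 0]          # make hidden units 0 and 2 redundant
W_out = rng.normal(size=(5, 2))  # 5 hidden units -> 2 outputs (rows)
W_in2, W_out2 = fuse_channels(W_in, W_out, 0, 2)
print(W_in2.shape, W_out2.shape)  # (4, 4) (4, 2)
```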

  • A Modified Information Criterion for Automatic Model and Parameter Selection in Neural Network Learning

    Sumio WATANABE  

     
    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E78-D No:4, Page(s): 490-499

    This paper proposes a practical training algorithm for artificial neural networks, by which both the optimally pruned model and the optimally trained parameter for the minimum prediction error can be found simultaneously. In the proposed algorithm, the conventional information criterion is modified into a differentiable function of weight parameters, and then it is minimized while being controlled back to the conventional form. Since this method has several theoretical problems, its effectiveness is examined by computer simulations and by an application to practical ultrasonic image reconstruction.
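
    An illustrative differentiable surrogate of an information criterion, assuming a smooth "effective parameter count" w^2/(w^2 + eps) that approaches a hard count as eps shrinks; the specific penalty form and its control are assumptions, not the paper's formulation.

```python
import numpy as np

def modified_criterion(y, y_hat, weights, lam, eps):
    # Training error plus a smooth, differentiable count of effective
    # parameters; as eps -> 0 the penalty approaches the hard parameter
    # count used by conventional information criteria.
    err = np.mean((y - y_hat) ** 2)
    effective_params = np.sum(weights ** 2 / (weights ** 2 + eps))
    return err + lam * effective_params

w = np.array([1.2, -0.8, 1e-4, 2.0])   # the tiny weight barely counts
print(modified_criterion(np.ones(3), 1.1 * np.ones(3), w, lam=0.01, eps=1e-3))
```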
