
Keyword Search Result

[Keyword] extraction (301 hits)

181-200 hits (301 hits)

  • A Practical Approach for Efficiently Extracting Interconnect Capacitances with Floating Dummy Fills

    Atsushi KUROKAWA  Toshiki KANAMOTO  Akira KASEBE  Yasuaki INOUE  Hiroo MASUDA  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E88-A No:11, Page(s): 3180-3187

    We present a practical method for dealing with the influence of floating dummy metal fills, which are inserted to assist planarization by the chemical-mechanical polishing (CMP) process, when extracting interconnect capacitances for system-on-chip (SoC) designs. The method is based on reducing the thicknesses of dummy metal layers according to electric field theory. We also clarify the influence of dummy metal fills on parasitic capacitance, signal delay, and crosstalk noise. Moreover, we show that interlayer dummy metal fills have a more significant impact on coupling capacitances than intralayer ones. When dummy metal fills are ignored, the capacitance extraction error can exceed 30%, whereas the error of the proposed method is below about 10% for many practical geometries. We also demonstrate, by comparison with capacitances measured on a 90-nm test chip, that the error of the proposed method is less than 8%.
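The effect of a floating fill can be illustrated with a simple parallel-plate approximation: a floating metal slab inside an oxide gap behaves like two capacitors in series, i.e. an effective gap reduced by the slab thickness. The sketch below uses this textbook model with assumed dimensions; it is not the paper's field-theory-based thickness reduction.

```python
# Illustrative parallel-plate model of a floating dummy fill between two
# interconnect layers. The constants and the series-capacitor model are
# assumptions for illustration, not the paper's derivation.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_OX = 3.9              # relative permittivity of SiO2 (typical)

def plate_cap(area, gap):
    """Parallel-plate capacitance across an oxide gap (fringing ignored)."""
    return EPS0 * EPS_OX * area / gap

def cap_with_floating_fill(area, gap, t_fill):
    """A floating metal slab of thickness t_fill inside the gap acts as two
    series capacitors across the remaining oxide, i.e. an effective gap of
    (gap - t_fill)."""
    return plate_cap(area, gap - t_fill)

area = 1e-6 * 1e-6        # 1 um x 1 um plate
c_ignore = plate_cap(area, 0.5e-6)                     # dummy fill ignored
c_model = cap_with_floating_fill(area, 0.5e-6, 0.2e-6)
print(c_model / c_ignore)  # the fill increases the coupling capacitance
```

Even this crude model shows why ignoring fills underestimates coupling: the fill shrinks the effective dielectric gap.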

  • Neural Network Rule Extraction by Using the Genetic Programming and Its Applications to Explanatory Classifications

    Shozo TOKINAGA  Jianjun LU  Yoshikazu IKEDA  

     
    PAPER

    Vol: E88-A No:10, Page(s): 2627-2635

    This paper deals with neural network rule extraction techniques based on Genetic Programming (GP) for building intelligent and explanatory evaluation systems. Recent developments in algorithms that extract rules from trained neural networks enable us to generate classification rules despite the networks' intrinsically black-box nature. However, in the original decompositional methods, which look at the internal structure of the networks, the comprehensive procedures combining the output with the inputs through parameters are complicated. We therefore use GP to automate the rule extraction process in trained neural networks whose statements have been converted into binary classifications. Although production (classification) rule generation based on GP alone is directly applicable to the underlying decision-making problems, in the original GP method the production rules include many statements described by arithmetic expressions as well as basic logical expressions, which makes the rule generation process very complicated. We therefore combine the neural network with binary classification to obtain simple and relevant classification rules in real applications, avoiding a straightforward application of the GP procedure to arithmetic expressions. First, a pruning process is applied to the weights among neurons to obtain simple but substantial binary expressions, which are used as statements in the classification rules. Then, GP is applied to generate the final rules. As applications, we generate rules for predicting bankruptcy and creditworthiness as binary classifications, and then apply the method to multi-level classification (rating) of corporate bonds using financial indicators.

  • Personal Identification Using Footstep Detection in In-Door Environment

    Yasuhiro SHOJI  Akitoshi ITAI  Hiroshi YASUKAWA  

     
    PAPER-Digital Signal Processing

    Vol: E88-A No:8, Page(s): 2072-2077

    Footsteps vary with footwear (heels, sneakers, leather shoes, or even bare feet) and with the walking surface (concrete, wood, etc.). If footsteps can be discriminated, applications in various fields become possible. In this paper, the feature extraction of footsteps is investigated. We focus on recognizing certain kinds of footstep waveforms under restricted conditions. We propose a new methodology using feature parameters such as the peak frequencies given by mel-cepstrum analysis, the walking intervals, and the similarity of the spectral envelope. Experimental results show that the proposed method is effective for personal identification.
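One of the feature parameters above, the walking interval, can be sketched as onset detection on a footstep envelope. The threshold, refractory time, and synthetic signal below are illustrative assumptions, not the paper's parameters.

```python
# Minimal walking-interval extraction: detect footstep onsets as upward
# threshold crossings of the envelope, then difference the onset times.
def onset_times(signal, fs, threshold, refractory=0.2):
    """Return times (s) where the envelope rises above `threshold`,
    ignoring re-triggers within `refractory` seconds of the last onset."""
    onsets, last = [], -refractory
    for i in range(1, len(signal)):
        t = i / fs
        if signal[i] >= threshold > signal[i - 1] and t - last >= refractory:
            onsets.append(t)
            last = t
    return onsets

def walking_intervals(onsets):
    return [b - a for a, b in zip(onsets, onsets[1:])]

fs = 1000
sig = [0.0] * 3000
for start in (100, 700, 1300, 1900):      # synthetic steps every 0.6 s
    for k in range(50):
        sig[start + k] = 1.0 - k / 50.0   # decaying footstep envelope
print(walking_intervals(onset_times(sig, fs, 0.5)))  # ~[0.6, 0.6, 0.6]
```

In the paper these intervals are combined with spectral features; here they stand alone for illustration.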

  • Handwritten Numeral String Recognition: Effects of Character Normalization and Feature Extraction

    Cheng-Lin LIU  Hiroshi SAKO  Hiromichi FUJISAWA  

     
    PAPER-String Recognition

    Vol: E88-D No:8, Page(s): 1791-1798

    The performance of integrated segmentation and recognition of handwritten numeral strings relies on the classification accuracy and the non-character resistance of the underlying character classifier, which vary depending on the techniques of pattern normalization, feature extraction, and classifier structure. In this paper, we evaluate the effects of 12 normalization functions and four selected feature types on numeral string recognition. Slant correction (deslant) is combined with the normalization functions and features to create 96 feature vectors, which are classified using two classifier structures. In experiments on numeral string images from NIST Special Database 19, the classifiers yielded very high string recognition accuracies. We show the superiority of moment normalization with adaptive aspect ratio mapping and of the gradient direction feature, and observe that slant correction benefits string recognition when combined with good normalization methods.
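Moment normalization, one of the techniques compared above, can be sketched as re-centering the image at its centroid and rescaling by its second-order moments. The 4-standard-deviation span and 32x32 target size below are illustrative assumptions, not the paper's exact parameters.

```python
# Moment-based normalization sketch: the character is re-centered at its
# centroid, and a span of `spread` standard deviations (from the second
# moments) is mapped onto the standard output plane by inverse mapping.
import numpy as np

def moment_normalize(img, out_size=32, spread=4.0):
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    xc, yc = np.average(xs, weights=w), np.average(ys, weights=w)
    sx = np.sqrt(np.average((xs - xc) ** 2, weights=w)) + 1e-9
    sy = np.sqrt(np.average((ys - yc) ** 2, weights=w)) + 1e-9
    out = np.zeros((out_size, out_size))
    # inverse mapping: each output pixel pulls from the source image
    for oy in range(out_size):
        for ox in range(out_size):
            x = xc + (ox / (out_size - 1) - 0.5) * spread * sx
            y = yc + (oy / (out_size - 1) - 0.5) * spread * sy
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                out[oy, ox] = img[yi, xi]
    return out
```

An off-center blob in a 64x64 input lands centered in the output plane, which is the point of the normalization.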

  • Semi-Automatic Video Object Segmentation Using LVQ with Color and Spatial Features

    Hariadi MOCHAMAD  Hui Chien LOY  Takafumi AOKI  

     
    PAPER-Image Processing and Multimedia Systems

    Vol: E88-D No:7, Page(s): 1553-1560

    This paper presents a semi-automatic algorithm for video object segmentation. Our algorithm assumes the use of multiple key video frames in which a semantic object of interest is defined in advance with human assistance. For video frames between every two key frames, the specified video object is tracked and segmented automatically using Learning Vector Quantization (LVQ). Each pixel of a video frame is represented by a 5-dimensional feature vector integrating spatial and color information. We introduce a parameter K to adjust the balance of spatial and color information. Experimental results demonstrate that the algorithm can segment the video object consistently with less than 2% average error when the object is moving at a moderate speed.
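The LVQ-based tracking above can be sketched with the classic LVQ1 update rule on the 5-dimensional feature [K*x, K*y, R, G, B], where K balances spatial against color information. The learning rate, prototype initialization, and value of K below are illustrative assumptions.

```python
# LVQ1 sketch for object/background pixel classification: the nearest
# prototype is attracted toward a sample of its own class and repelled
# from samples of the other class.
import numpy as np

def make_feature(x, y, rgb, K):
    return np.array([K * x, K * y, *rgb], dtype=float)

def lvq1_train(X, labels, protos, proto_labels, lr=0.05, epochs=20):
    protos = protos.copy()
    for _ in range(epochs):
        for v, lab in zip(X, labels):
            i = np.argmin(((protos - v) ** 2).sum(axis=1))  # nearest prototype
            step = lr if proto_labels[i] == lab else -lr    # attract / repel
            protos[i] += step * (v - protos[i])
    return protos

def lvq1_classify(v, protos, proto_labels):
    return proto_labels[np.argmin(((protos - v) ** 2).sum(axis=1))]

K = 10.0  # assumed spatial/color balance parameter
X = np.array([make_feature(x, y, rgb, K) for x, y, rgb in
              [(0.0, 0.0, (200, 0, 0)), (0.1, 0.0, (210, 5, 5)),
               (1.0, 1.0, (0, 0, 200)), (1.1, 1.0, (5, 0, 190))]])
labels = [0, 0, 1, 1]                 # 0 = object, 1 = background
protos = np.array([[0, 0, 150, 50, 50], [10, 10, 50, 50, 150]], dtype=float)
protos = lvq1_train(X, labels, protos, [0, 1])
print(lvq1_classify(make_feature(0.05, 0.0, (205, 2, 2), K), protos, [0, 1]))
```

Raising K makes position dominate the distance, tightening the tracker spatially; lowering it lets color dominate.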

  • Decomposition of Surface Data into Fractal Signals Based on Mean Likelihood and Importance Sampling and Its Applications to Feature Extraction

    Shozo TOKINAGA  Noboru TAKAGI  

     
    PAPER-Digital Signal Processing

    Vol: E88-A No:7, Page(s): 1946-1956

    This paper deals with the decomposition of surface data into several fractal signals, based on parameter estimation by mean likelihood and importance sampling (IS) using Monte Carlo simulation. The method is applied to the feature extraction of surface data. Assuming stochastic models for generating the surface, the likelihood function is defined using wavelet coefficients, and the parameters are estimated from the mean likelihood using IS. An approximation of the wavelet coefficients, together with statistics defined for their variances, is used for estimation, and the likelihood function is modified accordingly. After the underlying surface data are decomposed into several fractal surfaces, a prediction method for the fractal signals is employed based on scale expansion, exploiting the self-similarity of fractal geometry. After discussing the effect of additive noise, the method is applied to feature extraction from real surface data such as cloud and earthquake distributions.

  • Anchor Frame Detection in News Video Using Anchor Object Extraction

    Ki Tae PARK  Doo Sun HWANG  Young Shik MOON  

     
    LETTER

    Vol: E88-A No:6, Page(s): 1525-1528

    In this paper, an algorithm for anchor frame detection in news video is proposed, which consists of four steps. First, the cumulative histogram method is used to detect shot boundaries and segment the news video into shots. Second, skin color information is used to detect face regions in each shot. Third, color information of the upper-body region is used to extract the anchor object. Finally, a graph-theoretic cluster analysis algorithm classifies the shots into anchor-person shots and non-anchor shots. Experimental results have shown the effectiveness of the proposed algorithm.
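The first step above can be sketched with a simplified variant: flag a shot boundary where the gray-level histograms of consecutive frames differ strongly. The paper uses a cumulative histogram method; the plain histogram difference, bin count, and threshold here are assumptions for illustration.

```python
# Simplified shot-boundary detection by L1 distance between normalized
# gray-level histograms of consecutive frames.
import numpy as np

def histogram(frame, bins=16):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def shot_boundaries(frames, threshold=0.5):
    """Indices i where frame i starts a new shot."""
    cuts = []
    for i in range(1, len(frames)):
        d = np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum()
        if d > threshold:
            cuts.append(i)
    return cuts

frames = [np.full((8, 8), 20)] * 3 + [np.full((8, 8), 200)] * 3  # dark shot, then bright shot
print(shot_boundaries(frames))  # [3]
```

The face-detection and clustering steps would then operate within each detected shot.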

  • Extracting Partial Parsing Rules from Tree-Annotated Corpus: Toward Deterministic Global Parsing

    Myung-Seok CHOI  Kong-Joo LEE  Key-Sun CHOI  Gil Chang KIM  

     
    PAPER-Natural Language Processing

    Vol: E88-D No:6, Page(s): 1248-1255

    It is not always possible to find a global parse for an input sentence, owing to problems such as errors in the sentence or incompleteness of the lexicon and grammar. Partial parsing is an alternative approach that responds to these problems. Partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties in partial parsing is how the grammar can be extracted automatically. In this paper we present a method for automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is deterministic global parsing using partial parsing rules, in other words, extracting partial parsing rules with higher accuracy and broader coverage. First, we define a rule template that enables learning a subtree for a given substring, so that the resulting rules are more specific and stricter to apply. Second, rule candidates extracted from a training corpus are enriched with contextual and lexical information using the decision tree method and verified through cross-validation. Last, we underspecify non-deterministic rules by merging ambiguous substructures within those rules. The learned grammar is similar to a phrase structure grammar with contextual and lexical information, but allows building structures of depth one or more. Thanks to automatic learning, the partial parsing rules are consistent and domain-independent. Partial parsing with this grammar processes an input sentence deterministically using longest-match heuristics, applying rules recursively. The experiments showed that the partial parser using automatically extracted rules is not only accurate and efficient but also achieves reasonable coverage for Korean.
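Deterministic rule application with longest-match heuristics can be sketched as follows; the toy rules and tag names are invented for illustration, not learned from a tree-annotated corpus as in the paper.

```python
# Longest-match rule application: at each position the longest matching
# rule fires, its span is replaced by the constituent label, and passes
# repeat (recursive application) until no rule fires.
def longest_match_parse(tags, rules):
    """rules: dict mapping tag tuples -> constituent label."""
    max_len = max(len(k) for k in rules)
    changed = True
    while changed:
        changed, out, i = False, [], 0
        while i < len(tags):
            for n in range(min(max_len, len(tags) - i), 0, -1):  # longest first
                key = tuple(tags[i:i + n])
                if key in rules:
                    out.append(rules[key])
                    i += n
                    changed = True
                    break
            else:
                out.append(tags[i])  # no rule fires here; keep the tag
                i += 1
        tags = out
    return tags

rules = {("DET", "N"): "NP", ("NP", "V"): "S"}
print(longest_match_parse(["DET", "N", "V"], rules))  # ['S']
```

Because unmatched tags are simply carried over, the parser degrades gracefully on ungrammatical input, which is the point of partial parsing.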

  • Screen Pattern Removal for Character Pattern Extraction from High-Resolution Color Document Images

    Hideaki GOTO  Hirotomo ASO  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E88-D No:6, Page(s): 1310-1313

    Screen patterns used in offset-printed documents have been one of the great obstacles in developing document recognition systems that handle color documents. This paper proposes a selective smoothing method for filtering out screen patterns/noise in high-resolution color document images. Experimental results show that the method yields significant improvements in character pattern extraction.
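Selective smoothing of this general kind can be sketched as averaging only where the local intensity range is small, so fine screen dots are flattened while strong character edges survive. The 3x3 window and range threshold below are illustrative assumptions, not the paper's method.

```python
# Selective smoothing sketch: average a pixel with its 3x3 neighborhood
# only when the neighborhood's intensity range is below an edge threshold.
import numpy as np

def selective_smooth(img, edge_threshold=80):
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            if win.max() - win.min() < edge_threshold:  # no strong edge here
                out[y, x] = win.mean()
    return out

img = np.zeros((5, 5))
img[:, 3:] = 255.0      # a strong character edge
img[1, 1] = 40.0        # an isolated screen dot
sm = selective_smooth(img)
print(sm[1, 1], sm[2, 3])  # dot flattened, edge pixel untouched
```

Character strokes keep their sharp transitions because any window touching them exceeds the threshold and is left alone.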

  • A New Inductance Extraction Technique of On-Wafer Spiral Inductor Based on Analytical Interconnect Formula

    Hideki SHIMA  Toshimasa MATSUOKA  Kenji TANIGUCHI  

     
    PAPER

    Vol: E88-C No:5, Page(s): 824-828

    A new technique for extracting the inductance of a spiral inductor from a measurement fixture is presented. We propose a scalable expression for the parasitic inductance of interconnects, along with design considerations for a test structure accommodating the spiral inductor. The simple expression includes the mutual inductance between interconnects with high accuracy; the formula matches inductance values from a commercial field solver within 1.4%. The layout of the test structure, which reduces magnetic coupling between the spiral and the interconnects, allows us to extract the intrinsic inductance of the spiral more accurately. The proposed technique requires neither the special fixtures used in measurement-based methods nor the skilled worker needed for precise extraction with analytical techniques.
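For context, the flavor of analytical interconnect formulas can be sketched with the textbook Grover/Ruehli partial-inductance expressions for straight conductors. These are standard formulas standing in for the paper's own scalable expression, which is not reproduced here; dimensions are in centimeters, results in nanohenries.

```python
# Textbook partial-inductance formulas for straight conductors
# (Grover/Ruehli style), used only as a stand-in illustration.
import math

def self_inductance(l, w, t):
    """Partial self-inductance (nH) of a straight rectangular conductor
    of length l, width w, thickness t (all in cm)."""
    return 2 * l * (math.log(2 * l / (w + t)) + 0.5 + 0.2235 * (w + t) / l)

def mutual_inductance(l, d):
    """Partial mutual inductance (nH) of two parallel filaments of
    length l separated by distance d (both in cm)."""
    r = l / d
    return 2 * l * (math.log(r + math.sqrt(1 + r * r))
                    - math.sqrt(1 + 1 / (r * r)) + 1 / r)

# a 1 mm x 10 um x 1 um line (0.1 cm x 1e-3 cm x 1e-4 cm)
print(self_inductance(0.1, 1e-3, 1e-4))   # on the order of 1 nH
```

As expected physically, the mutual term falls off as the separation grows, which is why the paper's test structure spaces the interconnects away from the spiral.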

  • Feature Extraction with Combination of HMT-Based Denoising and Weighted Filter Bank Analysis for Robust Speech Recognition

    Sungyun JUNG  Jongmok SON  Keunsung BAE  

     
    LETTER

    Vol: E88-D No:3, Page(s): 435-438

    In this paper, we propose a new feature extraction method that combines HMT-based denoising and weighted filter bank analysis for robust speech recognition. The proposed method consists of two stages in cascade. The first stage is a denoising process based on the wavelet-domain Hidden Markov Tree model, and the second is filter bank analysis with weighting coefficients obtained from the residual noise of the first stage. To evaluate the performance of the proposed method, recognition experiments were carried out for additive white Gaussian and pink noise with signal-to-noise ratios from 25 dB to 0 dB. Experimental results demonstrate the superiority of the proposed method over conventional ones.

  • Applying Sparse KPCA for Feature Extraction in Speech Recognition

    Amaro LIMA  Heiga ZEN  Yoshihiko NANKAKU  Keiichi TOKUDA  Tadashi KITAMURA  Fernando G. RESENDE  

     
    PAPER-Feature Extraction and Acoustic Modeling

    Vol: E88-D No:3, Page(s): 401-409

    This paper presents an analysis of the applicability of Sparse Kernel Principal Component Analysis (SKPCA) to feature extraction in speech recognition, as well as a proposed approach that makes the SKPCA technique feasible for the large amounts of training data usual in speech recognition systems. Although KPCA (Kernel Principal Component Analysis) has proved to be an efficient technique for speech recognition, it has the disadvantage of requiring training data reduction when the amount of data is excessively large. This data reduction is important to avoid computational infeasibility and/or an extremely high computational burden in the feature representation step for the training and test data. The standard approach to this data reduction is to randomly choose frames from the original data set, which does not necessarily provide a good statistical representation of the original data. To solve this problem, a likelihood-based re-estimation procedure was applied to the KPCA framework, creating SKPCA, which is nevertheless not feasible for large training databases. The proposed approach consists of clustering the training data and applying an SKPCA-like data reduction to these clusters, generating reduced data clusters. These reduced clusters are merged and reduced recursively until a single cluster is obtained, making the SKPCA approach feasible for a large amount of training data. The experimental results show the efficiency of the SKPCA technique with the proposed approach over KPCA with the standard sparse solution using randomly chosen frames and over the standard feature extraction techniques.

  • Edge-Based Morphological Processing for Efficient and Accurate Video Object Extraction

    Yih-Haw JAN  David W. LIN  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E88-D No:2, Page(s): 335-340

    We consider the edge-linking approach for accurately locating moving object boundaries in video segmentation. We review existing methods and propose a scheme designed for efficiency and better accuracy. The scheme first obtains a very rough outline of an object by a suitable means, e.g., change detection. It then forms a relatively compact image region that properly contains the object, through a procedure termed "mask sketch." Finally, the outermost edges in the region are found and linked via a shortest-path algorithm. Experiments show that the scheme yields good performance.

  • The Extraction of Circles from Arcs Represented by Extended Digital Lines

    Euijin KIM  Miki HASEYAMA  Hideo KITAJIMA  

     
    PAPER-Image Processing and Video Processing

    Vol: E88-D No:2, Page(s): 252-267

    This paper presents a new fast and robust circle extraction method that is capable of extracting circles from images with complicated backgrounds. It is not based on the Hough transform (HT) that requires a time-consuming voting process. The proposed method uses a least-squares circle fitting algorithm for extracting circles. The arcs are fitted by extended digital lines that are extracted by a fast line extraction method. The proposed method calculates accurate circle parameters using the fitted arcs instead of evidence histograms in the parameter space. Tests performed on various real-world images show that the proposed method quickly and accurately extracts circles from complicated and heavily corrupted images.
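A least-squares circle fit of the kind used above can be done algebraically (the Kasa fit): writing the circle as x^2 + y^2 = Ax + By + C turns the fit into a linear least-squares problem, with center (A/2, B/2). This is a standard method and may differ in detail from the paper's fitting algorithm.

```python
# Algebraic (Kasa) least-squares circle fit: solve the overdetermined
# linear system [x y 1] [A B C]^T = x^2 + y^2, then recover center/radius.
import math
import numpy as np

def fit_circle(xs, ys):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    A, B, C = np.linalg.lstsq(M, b, rcond=None)[0]
    cx, cy = A / 2, B / 2
    r = math.sqrt(C + cx ** 2 + cy ** 2)
    return cx, cy, r

ts = np.linspace(0.0, 2.0, 20)            # points on a partial arc only
xs = 3.0 + 5.0 * np.cos(ts)
ys = -2.0 + 5.0 * np.sin(ts)
print(fit_circle(xs, ys))                 # recovers center (3, -2), radius 5
```

Because the fit is linear, no voting or parameter-space histogram is needed, which is the efficiency argument the abstract makes against the Hough transform.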

  • Improved Boundary Element Method for Fast 3-D Interconnect Resistance Extraction

    Xiren WANG  Deyan LIU  Wenjian YU  Zeyi WANG  

     
    PAPER-Microwaves, Millimeter-Waves

    Vol: E88-C No:2, Page(s): 232-240

    Efficient extraction of interconnect parasitic parameters has become very important for present deep-submicron designs. In this paper, an improved boundary element method (BEM) is presented for 3-D interconnect resistance extraction. The BEM is accelerated by the recently proposed quasi-multiple medium (QMM) technology, which quasi-cuts the calculated region to enlarge the sparsity of the overall coefficient matrix to be solved. A non-uniform quasi-cutting scheme for QMM, an advanced non-uniform element partition, and a technique employing linear elements for some special surfaces are proposed. These improvements considerably reduce the computational cost of the QMM-based BEM without loss of accuracy. Experiments on actual layout cases show that the presented method is several hundred to several thousand times faster than the well-known commercial software Raphael, while preserving high accuracy.

  • Pixel-Level Color Demodulation Image Sensor for Support of Image Recognition

    Yusuke OIKE  Makoto IKEDA  Kunihiro ASADA  

     
    PAPER-Electronic Circuits

    Vol: E87-C No:12, Page(s): 2164-2171

    In this paper, we present a pixel-level color image sensor with efficient ambient light suppression using a modulated RGB flashlight to support a recognition system. The image sensor employs bidirectional photocurrent integrators for pixel-level demodulation and ambient light suppression. It demodulates the projected flashlight while suppressing ambient light at short intervals during an exposure period. In an imaging system using an RGB-modulated flashlight, every pixel provides innate color and depth information of a target object for color-based categorization and depth-keyed object extraction. We have designed and fabricated a prototype chip with 64×64 pixels using a 0.35 µm CMOS process. Color image reconstruction and time-of-flight range finding have been performed as a feasibility test.

  • On the Use of Kernel PCA for Feature Extraction in Speech Recognition

    Amaro LIMA  Heiga ZEN  Yoshihiko NANKAKU  Chiyomi MIYAJIMA  Keiichi TOKUDA  Tadashi KITAMURA  

     
    PAPER-Speech and Hearing

    Vol: E87-D No:12, Page(s): 2802-2811

    This paper describes an approach to feature extraction in speech recognition systems using kernel principal component analysis (KPCA). This approach represents speech features as the projection of the mel-cepstral coefficients mapped into a feature space via a non-linear mapping onto the principal components. The non-linear mapping is implicitly performed using the kernel-trick, which is a useful way of not mapping the input space into a feature space explicitly, making this mapping computationally feasible. It is shown that the application of dynamic (Δ) and acceleration (ΔΔ) coefficients, before and/or after the KPCA feature extraction procedure, is essential in order to obtain higher classification performance. Better results were obtained by using this approach when compared to the standard technique.
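The underlying KPCA projection can be sketched in a few lines: build a kernel Gram matrix, center it in feature space, eigendecompose, and project the data onto the leading components. The RBF kernel, its width gamma, and the component count below are illustrative assumptions (the paper applies KPCA to mel-cepstral coefficients).

```python
# Compact KPCA sketch with an RBF kernel.
import numpy as np

def kpca(X, n_components=2, gamma=0.1):
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF Gram matrix
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort eigenvalues descending
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of training data

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 2), rng.randn(10, 2) + 8.0])  # two clusters
Z = kpca(X, n_components=2)
print(Z.shape)  # (20, 2)
```

The "kernel trick" appears in that only the Gram matrix K is ever formed; the non-linear feature space is never materialized.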

  • Automatic Extraction of Layout-Dependent Substrate Effects for RF MOSFET Modeling

    Zhao LI  Ravikanth SURAVARAPU  Kartikeya MAYARAM  C.-J. Richard SHI  

     
    PAPER-Device Modeling

    Vol: E87-A No:12, Page(s): 3309-3317

    This paper presents CrtSmile, a CAD tool for the automatic extraction of layout-dependent substrate effects for RF MOSFET modeling. CrtSmile incorporates a new scalable substrate model that depends not only on the geometric layout information of a transistor (the number of gate fingers, finger width, channel length, and bulk contact location), but also on the transistor layout and bulk patterns. We show that this model is simple to extract and agrees well with measured data for a 0.35 µm CMOS process. CrtSmile reads in the layout information of RF transistors in CIF/GDSII format and performs a pattern-based layout extraction to recognize the transistor layout and bulk patterns. A scalable layout-dependent substrate model is automatically generated and attached to the standard BSIM3 device model as a sub-circuit for use in circuit simulation. A low noise amplifier is evaluated with the proposed CrtSmile tool, showing the importance of layout effects for RF transistor substrate modeling.

  • On the Use of Shanks Transformation to Accelerate Capacitance Extraction for Periodic Structures

    Ye LIU  Zheng-Fan LI  Mei XUE  Rui-Feng XUE  

     
    LETTER-Electromagnetic Theory

    Vol: E87-C No:6, Page(s): 1078-1081

    In this paper, an integral equation method is used to compute the capacitance of three-dimensional structures. Since some multi-conductor structures exhibit a regular periodic property, a periodic cell with appropriate magnetic and electric walls added is used to reduce the computational domain. The periodic Green's function in the integral equation method is represented as an infinite series with slow convergence. The Shanks transformation is used to accelerate the convergence. Numerical examples show that the proposed method is accurate and much more efficient for capacitance extraction of 3-D periodic structures.
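The Shanks transformation itself is compact: S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n) applied to the partial sums A_n. Below it is demonstrated on the Leibniz series for pi/4, a standard slowly convergent example standing in for the periodic Green's function series treated in the letter.

```python
# Shanks transformation of a sequence of partial sums; the transformation
# can be iterated for further acceleration.
import math

def shanks(seq):
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2)
            / (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

partial = []
s = 0.0
for k in range(12):
    s += (-1) ** k / (2 * k + 1)   # Leibniz series: 1 - 1/3 + 1/5 - ... -> pi/4
    partial.append(s)

once = shanks(partial)
twice = shanks(once)               # iterate the transformation
print(abs(partial[-1] - math.pi / 4))  # ~2e-2 after 12 raw terms
print(abs(twice[-1] - math.pi / 4))    # several orders of magnitude smaller
```

A handful of transformed terms reaches an accuracy that would otherwise require thousands of raw terms, which is exactly the benefit claimed for the Green's function series.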

  • NTM-Agent: Text Mining Agent for Net Auction

    Yukitaka KUSUMURA  Yoshinori HIJIKATA  Shogo NISHIDA  

     
    PAPER

    Vol: E87-D No:6, Page(s): 1386-1396

    Net auctions have become widely used with the recent development of the Internet. However, there are often too many items for bidders to select the most suitable one. We aim at supporting bidders on net auctions by automatically generating a table that compares the features of several items. We construct a system called NTM-Agent (Net auction Text Mining Agent). The system collects web pages of items and extracts the items' features from the pages; it then generates a table containing the extracted features. This research focuses on two problems in the process. The first is that if the system collects items automatically, the results contain items that differ from the user's target. The second is that descriptions in net auctions are not uniform: there are different formats such as sentences, itemized lists, and tables, and the subjects of some sentences are omitted. It is therefore difficult to extract information from the descriptions by conventional information extraction methods. This research proposes methods to solve these problems. For the first problem, NTM-Agent filters the items using correlation rules about keywords in the titles and item descriptions; these rules are created semi-automatically by a support tool. For the second problem, NTM-Agent extracts the information by distinguishing the formats, and it also learns feature values from plain examples for future extraction.
