
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.59
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E79-D No.5 (Publication Date: 1996/05/25)

    Special Issue on Character Recognition and Document Understanding
  • FOREWORD

    Shunji MORI  

     
    FOREWORD

      Page(s):
    399-400
  • Advances in Recognition Methods for Handwritten Kanji Characters

    Michio UMEDA  

     
    INVITED PAPER

      Page(s):
    401-410

    This paper describes advances in the study of handwritten Kanji character recognition, performed mainly in Japan. The research focus has shifted from investigating the possibility of recognition by the stroke structure analysis method to studying the feasibility of recognition by feature matching methods. A great number of features and their extraction methods have been proposed along this line. On the other hand, studies on pattern matching methods that recognize Kanji characters using the character pattern itself have also been made. The research efforts based on these two approaches have led to the empirical finding that handwritten Kanji character recognition becomes more effective when greater attention is paid to the feature of directionality. Furthermore, in an effort to achieve recognition with higher precision, active research has been carried out on pre-processing techniques such as the forced reshaping of input patterns, the development of more effective features, and nonlinear flexible matching algorithms. In spite of these efforts, current character recognition techniques represent only a skill of "guessing characters" and are still at an insufficient technical level. Subsequent studies on character recognition must address the question of how to "understand characters".

  • Handprinted Kanji OCR Development --What was Solved in Handprinted Kanji Character Recognition? --

    Jun TSUKUMO  

     
    INVITED PAPER

      Page(s):
    411-416

    Handprinted Kanji character recognition was the most important research topic of the 1980s. Today there are several commercial handprinted Kanji OCR products, but many problems remain unsolved. This paper focuses on three types of research approach to handprinted Kanji character recognition. The first is directional pattern matching, which is generally applied to handprinted Kanji character recognition. The second is potential field feature extraction, which stimulated Kanji character recognition research in its early stage. The third is shape matching. This paper surveys these research approaches and discusses both their contributions and their remaining problems.

  • Present State of Recognition Method on Consideration of Neighbor Points and Its Ability in Common Database

    Kazuhiko YAMAMOTO  

     
    INVITED PAPER

      Page(s):
    417-422

    Among the various characters invented by human beings, Kanji (Chinese characters) are outstanding in their diversity and complexity, in contrast to Roman alphanumerics, so machine recognition of handwritten Kanji requires particular techniques. This paper concerns commercially available optical character readers (OCRs) and the recognition techniques used therein. Methods of character recognition are classified by pattern type and by the method of extracting features in order to discuss the present state of character recognition. With regard to structural analysis, the proposed techniques are classified and discussed according to the method of extracting line segments and axial correlation. Finally, various techniques are compared with one another using a common database, so as to understand the present state of character recognition and to discuss technical trends.

  • Continuous Nonlinearity in Character Recognition

    Hiromitsu YAMADA  

     
    INVITED PAPER

      Page(s):
    423-428

    "Continuous nonlinearity" is stressed as a fundamental principle in pattern recognition, including handprinted Kanji character recognition. "Continuity" in template matching and "spatial nonlinearity" in structural analysis should be unified toward deriving a higher level of recognition algorithm. At the same time, continuous nonlinearity along the temporal axis is important, as in the simultaneous processing of segmentation and recognition for touching characters. This viewpoint is discussed through the following examples: nonlinear normalization, directional pattern matching, locally maximized similarity, relaxation matching, dynamic programming matching, segmentation of character strings using dynamic programming, and exhaustive matching for character extraction on complex backgrounds.
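
    As a rough illustration of the kind of continuous, nonlinear alignment discussed above, the sketch below implements a generic dynamic-programming (DTW-style) match between two one-dimensional feature sequences. It is illustrative only; the specific algorithms surveyed in the paper (nonlinear normalization, relaxation matching, and so on) are not reproduced.

```python
import numpy as np

def dp_match(a, b):
    """Minimum cumulative distance aligning sequences a and b with nonlinear warping."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # allow stretching or compressing either sequence (continuous nonlinearity)
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

print(dp_match([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # a small warp gives a small distance
```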

  • Results of IPTP Character Recognition Competitions and Studies on Multi-expert System for Handprinted Numeral Recognition

    Toshio TSUTSUMIDA  Toshihiro MATSUI  Tadashi NOUMI  Toru WAKAHARA  

     
    PAPER-Comparative Study

      Page(s):
    429-435

    By comparing the results of two successive IPTP Character Recognition Competitions, which focused on 3-digit handprinted postal codes, we analyze the methodologies of the submitted algorithms along with the patterns they substituted or rejected. Regarding methodology, rather little diversity was apparent, particularly concerning the use of contour-chain codes based on local stroke directions and statistical discriminant functions for feature extraction and discrimination. Analysis of the patterns demonstrated that the misrecognitions most often improved between the competitions involved peculiarly shaped handwritten characters or heavy-handed and disconnected strokes. However, most of the remaining misrecognitions were still peculiarly shaped handwriting shared among the best three algorithms. From these analyses, we delineate a direction for developing more effective methodologies and clarify the remaining problems to be overcome by subsequent intensive research. Furthermore, we evaluate our multi-expert recognition system, which achieves higher recognition performance by combining complementary recognition algorithms. We investigate the Candidate Appearance Likelihood Method under new experimental conditions and examine the application of a neural network as the combining method for accumulating broader candidate appearances. The results confirm that combining through the neural network is one of the most effective ways of making the multi-expert recognition system a reality.

  • Evaluation and Synthesis of Feature Vectors for Handwritten Numeral Recognition

    Fumitaka KIMURA  Shuji NISHIKAWA  Tetsushi WAKABAYASHI  Yasuji MIYAKE  Toshio TSUTSUMIDA  

     
    PAPER-Comparative Study

      Page(s):
    436-442

    This paper consists of two parts. The first part is devoted to a comparative study on handwritten ZIP code numeral recognition using seventeen typical feature vectors and seven statistical classifiers. This part is the counterpart of the sister paper "Handwritten Postal Code Recognition by Neural Network - A Comparative Study" in this special issue. In the second part, a procedure for synthesizing a feature vector from the original feature vectors is studied. In order to reduce the dimensionality of the synthesized feature vector, the effect of dimension reduction on classification accuracy is examined. The best synthesized feature vector, of size 400, achieves remarkably higher recognition accuracy than any of the original feature vectors in a recognition experiment using a large number of numeral samples collected from real postal ZIP codes.
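
    The following minimal sketch illustrates the general idea of synthesizing a feature vector by concatenating several original feature vectors and then reducing its dimensionality. PCA is used purely as a stand-in reduction step; the paper's own synthesis and reduction procedure is not reproduced.

```python
import numpy as np

def synthesize(features, out_dim):
    """features: list of (n_samples, d_i) arrays; returns an (n_samples, out_dim) array."""
    x = np.hstack(features)                      # concatenate the original feature vectors
    x = x - x.mean(axis=0)                       # center the data
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:out_dim].T                    # keep the leading principal components

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(100, 300)), rng.normal(size=(100, 200))
print(synthesize([f1, f2], out_dim=50).shape)    # (100, 50)
```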

  • Handwritten Postal Code Recognition by Neural Network --A Comparative Study --

    Ahmad Fadzil ARIF  Hidekazu TAKAHASHI  Akira IWATA  Toshio TSUTSUMIDA  

     
    PAPER-Comparative Study

      Page(s):
    443-449

    This paper compares some popular character recognition techniques that have been proposed to date. Seventeen feature extraction methods and four neural-network-based recognition processes were applied to handwritten numeral (postal code) recognition. It was found that the Weighted Direction Index Histogram, Peripheral Direction Contributivity Function, and Expansion Cell feature extractions gave good results. As for the neural network recognition processes, CombNET-II and a multi-layer neural network showed good performance.

  • A Multi-Agent Based Method for Extracting Characters and Character Strings

    Keiji GYOHTEN  Tomoko SUMIYA  Noboru BABAGUCHI  Koh KAKUSHO  Tadahiro KITAHASHI  

     
    PAPER-Segmentation

      Page(s):
    450-455

    This paper describes COCE (COordinative Character Extractor), a method for extracting printed Japanese characters and their character strings from all sorts of document images. COCE is based on a multi-agent system in which each agent tries to find a character string and extracts the characters in it. For adaptability, the agents are allowed to look after arbitrary parts of documents and to extract characters using only knowledge independent of the layouts. Moreover, the agents check and correct their results, sometimes with the help of the other agents. Experimental results have verified the effectiveness of our approach.

  • Quantitative Evaluation of Improved Global Interpolation in the Segmentation of Handwritten Numbers Overlapping a Border

    Satoshi NAOI  Misako SUWA  Maki YABUKI  

     
    PAPER-Segmentation

      Page(s):
    456-463

    The global interpolation method we proposed can extract a handwritten alphanumeric character pattern even if it overlaps a border. Our method interpolates blank segments in a character after borders are removed, evaluating segment pattern continuity and connectedness globally to produce characters with smooth edges. The main feature of this method is that it evaluates global component-label connectivity as pattern connectedness. However, the method cannot interpolate missing superposed loop segments, because they lack segment pattern continuity and already have global component-label connectivity. To solve this problem, we improved the method by adding loop interpolation as a global evaluation. The evaluation of character segment continuity is also improved to achieve higher-quality character patterns. Because there is no database of characters overlapping borders, we also propose an evaluation method that generates various kinds of overlapping numerals from an ETL database. Experimental results using these generated patterns showed that the improved global interpolation method is very effective for numbers that overlap a border.

  • Cursive Handwritten Word Recognition Using Multiple Segmentation Determined by Contour Analysis

    Hirobumi YAMADA  Yasuaki NAKANO  

     
    PAPER-Word Recognition

      Page(s):
    464-470

    This paper proposes a method for cursive handwritten word recognition. Cursive word recognition generally consists of segmentation of a cursive word, character recognition, and word recognition. Traditional approaches detect one candidate segmentation point between characters and cut the touching characters at that point [1]. However, it is difficult to detect a correct segmentation point between characters in a cursive word, because the form of touching characters varies greatly from case to case. In this research, we determine multiple candidate segmentation points between characters, and character recognition and word recognition decide which candidate is the most plausible touching point. In the experiment, at the character recognition stage the recognition rate was 75.7%, while the cumulative recognition rate within the best three candidates was 93.7%. In word recognition, the recognition rate was 79.8%, while the cumulative recognition rate within the best five candidates was 91.7% for a lexicon size of 50. The processing speed is about 30 sec/word on a SPARCstation 5.
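
    A toy sketch of the overall strategy follows: several candidate cut points are kept, each possible character span is scored by a recognizer, and dynamic programming picks the most plausible segmentation. The contour-analysis cut detection and the actual recognizers from the paper are not reproduced; char_score here is a hypothetical stand-in.

```python
def best_segmentation(cuts, char_score):
    """cuts: sorted candidate cut positions, including both word ends.
    char_score(i, j) -> (label, confidence) for the span between cuts[i] and cuts[j]."""
    n = len(cuts)
    best = {0: (0.0, "")}                               # reached cut index -> (score, text)
    for j in range(1, n):
        for i in range(j):
            label, conf = char_score(i, j)
            cand = (best[i][0] + conf, best[i][1] + label)
            if j not in best or cand > best[j]:
                best[j] = cand
    return best[n - 1]

# toy usage with a dummy scorer that prefers spans exactly two units wide
cuts = [0, 2, 3, 5]
score = lambda i, j: ("x", 1.0 if cuts[j] - cuts[i] == 2 else 0.4)
print(best_segmentation(cuts, score))                   # (2.4, 'xxx')
```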

  • Robust n-Gram Model of Japanese Character and Its Application to Document Recognition

    Hiroki MORI  Hirotomo ASO  Shozo MAKINO  

     
    PAPER-Postprocessing

      Page(s):
    471-476

    A new postprocessing method using an interpolated n-gram model for Japanese documents is proposed. The method has advantages over conventional approaches in enabling high-speed, knowledge-free processing. In parameter estimation of an n-gram model for a large vocabulary, it is difficult to obtain sufficient training samples. To overcome this scarcity of samples, two smoothing methods for a Japanese character trigram model are evaluated, and the superiority of the deleted interpolation method is shown in terms of perplexity. A document recognition system based on the trigram model is constructed, which finds maximum likelihood solutions by the Viterbi algorithm. Experimental results for three kinds of documents show that performance is high when the deleted interpolation method is used for smoothing: 90% of OCR errors are corrected for documents similar to the training text data, and 75% of errors are corrected for documents not so similar to the training text data.
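
    A minimal sketch of the two ingredients described above is given below, under assumed data structures: a trigram probability smoothed by linear interpolation of trigram, bigram, and unigram relative frequencies (the lambda weights would normally be estimated by deleted interpolation on held-out text; here they are fixed by hand), and a Viterbi-style search over per-position OCR candidate lists.

```python
import math

def interp_trigram(c1, c2, c3, tri, bi, uni, lambdas=(0.6, 0.3, 0.1)):
    """tri/bi/uni: dicts of relative frequencies, e.g. tri[(c1, c2, c3)]."""
    l3, l2, l1 = lambdas
    return (l3 * tri.get((c1, c2, c3), 0.0)
            + l2 * bi.get((c2, c3), 0.0)
            + l1 * uni.get(c3, 0.0))

def viterbi(candidates, prob):
    """candidates: per-position lists of (char, ocr_score); prob(c1, c2, c3) -> probability."""
    beams = [("^", "^", 0.0, "")]                      # (prev2, prev1, log score, text)
    for cands in candidates:
        new = {}
        for p2, p1, lp, text in beams:
            for ch, score in cands:
                l = lp + math.log(max(score * prob(p2, p1, ch), 1e-12))
                key = (p1, ch)
                if key not in new or l > new[key][2]:
                    new[key] = (p1, ch, l, text + ch)
        beams = list(new.values())
    return max(beams, key=lambda b: b[2])[3]
```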

  • A Gray Zone Between Two Classes --Case of Smooth Curvature Change--

    Shunji MORI  Yu NAKAJIMA  Hirobumi NISHIDA  

     
    PAPER-Shape Models

      Page(s):
    477-484

    There are many instances in which the character shape of one class changes smoothly into that of another class. Although the change can take many forms, the most delicate involves the curvature feature. This paper treats the problem systematically, both theoretically and experimentally. Specifically, some confusing pairs well known in the field of OCR, such as 2-Z and 4-9, were constructed. A series of samples generated from each model, changing subtly, was used to conduct a psychological experiment. The results exhibit a monotone change of recognition rates from nearly 100% to 0% as the shape changes continuously. To imitate human performance, the curvature feature was extracted based on a continuous function representation using Bézier spline curves. Specifically, two methods were tried from theoretical and engineering points of view, and very successful results were obtained.

  • Pluralizing Method of Simple Similarity

    Yasuhiro AOKI  Taizo IIJIMA  

     
    PAPER-Classification Methods

      Page(s):
    485-490

    For similarity methods to work well, the image must be blurred before being input. However, the relationship between the blurring operation and the similarity is not fully understood. To clarify this relationship, this paper investigates the effect of blurring by expressing a figure f(x) as a sum of higher derivatives of f(x, σ), and a simple similarity between figures is then formulated mathematically in terms of the relation between visual patterns. By modifying this formulation, we propose a pluralized simple similarity that increases the tolerance, from a viewpoint different from that of multiple similarity. The similarity maintains higher tolerance without any discernible loss of distinguishing power. We verify the effectiveness of the pluralized simple similarity through several experiments.
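
    As background for the abstract above, the sketch below computes a baseline simple similarity, taken here as the squared, normalized inner product between two patterns that have first been blurred with a Gaussian filter. This is only one plausible formulation; the pluralized similarity proposed in the paper is not reproduced, and the exact definitions used there may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_similarity(f, g, sigma=1.0):
    """Squared normalized inner product of two blurred pattern arrays."""
    f = gaussian_filter(f.astype(float), sigma).ravel()
    g = gaussian_filter(g.astype(float), sigma).ravel()
    return float(np.dot(f, g) ** 2 / (np.dot(f, f) * np.dot(g, g)))
```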

  • Recognition of Degraded Machine-Printed Characters Using a Complementary Similarity Measure and Error-Correction Learning

    Minako SAWAKI  Norihiro HAGITA  

     
    PAPER-Classification Methods

      Page(s):
    491-497

    Most conventional methods used in character recognition extract geometrical features, such as stroke direction and connectivity, and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs and stains, and by graphical designs such as those used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is perfectly accurate. This paper proposes a method for recognizing degraded characters as well as characters printed on graphical designs. The method extracts features from binary images, and a new similarity measure, the complementary similarity measure, is used as a discriminant function; it compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that our method is much more robust against noise than the conventional geometrical-feature method. It also achieves high recognition rates: over 97% for characters with textured foregrounds, over 99% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse-contrast characters. Experiments on recognizing both font style and character category show that it also achieves high recognition rates in the presence of noise.

  • A Handwritten Character Recognition System by Efficient Combination of Multiple Classifiers

    Hideaki YAMAGATA  Hirobumi NISHIDA  Toshihiro SUZUKI  Michiyoshi TACHIKAWA  Yu NAKAJIMA  Gen SATO  

     
    PAPER-Classification Methods

      Page(s):
    498-503

    Handwritten character recognition has been increasing in importance and expanding its application areas, such as office automation, postal service automation, and automatic data entry to computers. It is challenging to develop a handwritten character recognition system with high processing speed, high performance, and high portability, because there is a trade-off among them. With current technology it is difficult to attain high performance and high processing speed at the same time with a single algorithm, and therefore we need an efficient way of combining multiple algorithms. We present an engineering solution to this problem. The system as a whole is based on a multi-stage strategy: the first stage is a simple, fast, and reliable recognition algorithm with a low substitution-error rate, which recognizes data of high quality, whereas sloppily written or degraded data are rejected and sent to the second stage. The second stage is composed of a sophisticated structural pattern classifier and a pattern matching classifier, and these two complementary algorithms run in parallel (a multiple-expert approach). We demonstrate the performance of the completed system by experiments using real data.
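
    A schematic sketch of the multi-stage strategy, with stand-in classifiers, is shown below; the acceptance threshold and the rule for combining the two second-stage classifiers are illustrative assumptions, not the system's actual parameters.

```python
def recognize(image, fast_clf, structural_clf, matching_clf, accept=0.95):
    """Each classifier is assumed to return (label, confidence)."""
    label, conf = fast_clf(image)
    if conf >= accept:                 # stage 1: accept only reliable results
        return label
    # stage 2: two complementary classifiers handle the rejected data in parallel
    l1, c1 = structural_clf(image)
    l2, c2 = matching_clf(image)
    return l1 if c1 >= c2 else l2
```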

  • A Handprinted Character Recognition System Using Image Transformation Based on Partial Inclination Detection

    Masato SUZUKI  Nei KATO  Hirotomo ASO  Yoshiaki NEMOTO  

     
    PAPER-Handwritten Character Recognition

      Page(s):
    504-509

    In the recognition of handprinted characters, it is important to resolve distortions of characters caused by writers' habits. In order to resolve such distortions and obtain better features, many image conversion methods have been proposed, but there are distortions that these methods cannot handle. One example is parallel strokes that spread out in a fan shape. In this paper we propose a new image conversion method, "Transformation based on Partial Inclination Detection (TPID)", which is applied just before normalization and is intended to resolve several kinds of distortion in the image of each character. TPID constructs transformation functions from inclination angles detected in subregions of the character image and converts the image using these functions. TPID is especially suitable for correcting the inclinations of the horizontal and vertical strokes of a character, which has a powerful impact on the quality of the extracted features. In recognition experiments using ETL9B, the largest database of handprinted characters in Japan, we obtained a recognition rate of 99.08%, the best to our knowledge.

  • Precise Selection of Candidates for Handwritten Character Recognition Using Feature Regions

    Fang SUN  Shin'ichiro OMACHI  Hirotomo ASO  

     
    PAPER-Handwritten Character Recognition

      Page(s):
    510-515

    In this paper, a new algorithm for selecting candidates for handwritten character recognition is presented. Since we adopt the concept of the marginal radius to examine the confidence of candidates, the evaluation function is required to describe the pattern distribution correctly. For this reason, we propose the Simplified Mahalanobis distance and observe its behavior by simulation. In the proposed algorithm, first, for each character, two types of feature region (multi-dimensional and one-dimensional) are estimated statistically from training samples. Then, by referring to the feature regions, candidates are selected and verified. Using the two types of feature regions is a principal characteristic of our method. If parameters are estimated accurately, the multi-dimensional feature region is extremely effective for character recognition; in general, however, estimation errors occur, especially with a small number of sample patterns. Although the recognition ability of the one-dimensional feature region is not as high, it can express the distribution comparatively precisely in one-dimensional space. By combining these feature regions, they work together to overcome each other's defects. The effectiveness of the method is shown with experimental results.
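
    The sketch below shows one common way of stabilizing a Mahalanobis-type distance against estimation error: only the leading eigen-directions use their estimated eigenvalues, while the remaining directions share a constant variance. This is offered only as a plausible reading; the paper's exact Simplified Mahalanobis distance may be defined differently.

```python
import numpy as np

def simplified_mahalanobis(x, mean, cov, k=10):
    """Mahalanobis-type distance using the k leading eigen-directions of cov."""
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]      # reorder to descending
    proj = vecs.T @ (x - mean)
    dist = np.sum(proj[:k] ** 2 / np.maximum(vals[:k], 1e-6))
    # remaining directions share one pooled variance (guards against estimation error)
    rest_var = max(vals[k:].mean(), 1e-6) if len(vals) > k else 1e-6
    dist += np.sum(proj[k:] ** 2) / rest_var
    return float(dist)
```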

  • High Accuracy Recognition of ETL9B Using Exclusive Learning Neural Network - II (ELNET-II)

    Kazuki SARUTA  Nei KATO  Masato ABE  Yoshiaki NEMOTO  

     
    PAPER-Neural Networks

      Page(s):
    516-522

    In earlier work we proposed the Exclusive Learning neural NETwork (ELNET), which can be used to construct a large-scale recognition system for Chinese characters. However, it did not resolve the problem of how to use training samples effectively to generate more suitable recognition boundaries. In this paper, we propose ELNET-II, in which an attempt is made to deal with this problem. In comparison with ELNET, the method of selecting training samples is improved, and the module size is variable according to the number of training samples for each module. In a recognition experiment on ETL9B (3,036 categories) using ELNET-II, we obtained a maximum recognition rate of 95.84%. This is the first time that such a high recognition rate has been obtained with neural networks.

  • Recognition of Devanagari Characters Using Neural Networks

    Kanad KEENI  Hiroshi SHIMODAIRA  Tetsuro NISHINO  Yasuo TAN  

     
    PAPER-Neural Networks

      Page(s):
    523-528

    Devanagari is the most widely used script in India. Here, a method is introduced for recognizing Devanagari characters using a neural network. The proposed method reduces the number of output units necessary compared with a conventional neural network, where classification is on a winner-take-all basis. An automatic coding procedure for representing the output layer of the network and a different method for the final classification are also proposed. Along with the automatic coding procedure, a heuristic method for representing the output units by exploiting the structural information of Devanagari characters is demonstrated. It is also shown, by random representations of the output layer, that the representation affects the generalization performance of the network. The proposed automatic representation gave a recognition rate of 98.09% for 44 categories.
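
    The sketch below shows the general idea of reducing the number of output units by giving each class a compact binary code instead of a one-hot (winner-take-all) unit and classifying by the nearest code; the paper's automatic coding procedure and its structural heuristics for Devanagari are not reproduced.

```python
import numpy as np

def make_codes(n_classes):
    """Assign each class a binary code of ceil(log2(n_classes)) bits."""
    bits = int(np.ceil(np.log2(n_classes)))
    return np.array([[(c >> b) & 1 for b in range(bits)] for c in range(n_classes)], float)

def classify(output_activations, codes):
    """Pick the class whose code is nearest to the network's output activations."""
    return int(np.argmin(np.sum((codes - output_activations) ** 2, axis=1)))

codes = make_codes(44)          # 44 categories need only 6 output units instead of 44
print(codes.shape)              # (44, 6)
print(classify(np.array([0., 1., 0., 1., 0., 0.]), codes))  # index of the nearest code
```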

  • Stroke-Number and Stroke-Order Free On-Line Kanji Character Recognition as One-to-One Stroke Correspondence Problem

    Toru WAKAHARA  Akira SUZUKI  Naoki NAKAJIMA  Sueharu MIYAHARA  Kazumi ODAKA  

     
    PAPER-Online Recognition

      Page(s):
    529-534

    This paper describes an on-line Kanji character recognition method that solves the one-to-one stroke correspondence problem with both the stroke-number and stroke-order variations common in cursive Japanese handwriting. We propose two kinds of complementary algorithms: one dissolves excessive mapping and the other dissolves deficient mapping. Their joint use realizes stable optimal stroke correspondence without combinatorial explosion. Also, three kinds of inter-stroke distances are devised to deal with stroke concatenation or splitting and heavy shape distortion. These new ideas greatly improve the stroke matching ability of the selective stroke linkage method reported earlier by the authors. In experiments, only a single reference pattern for each of 2,980 Kanji character categories is generated by using training data composed of 120 patterns written carefully with the correct stroke-number and stroke-order. Recognition tests are made using the training data and two kinds of test data in the square style and in the cursive style written by 36 different people; recognition rates of 99.5%, 97.6%, and 94.1% are obtained, respectively. Moreover, comparative results obtained by the current OCR technique as applied to bitmap patterns of on-line character data are presented. Finally, future work for enhancing the stroke matching approach to cursive Kanji character recognition is discussed.
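
    For illustration only, the sketch below casts the underlying one-to-one stroke correspondence as a standard assignment problem over an inter-stroke distance matrix; the paper's complementary algorithms for excessive and deficient mapping and its three inter-stroke distances are not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def stroke_correspondence(dist_matrix):
    """dist_matrix[i, j]: distance between input stroke i and reference stroke j."""
    rows, cols = linear_sum_assignment(dist_matrix)      # minimum-cost one-to-one matching
    return list(zip(rows.tolist(), cols.tolist())), float(dist_matrix[rows, cols].sum())

d = np.array([[0.1, 2.0, 1.5],
              [1.8, 0.2, 1.1],
              [1.4, 1.2, 0.3]])
print(stroke_correspondence(d))   # pairs (0,0), (1,1), (2,2) with total distance 0.6
```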

  • On-Line Signature Verification by Adaptively Weighted DP Matching

    Peng ZHAO  Atsusi HIGASHI  Yukio SATO  

     
    PAPER-Signature Verification

      Page(s):
    535-541

    This paper deals with on-line signature verification. A signature is obtained as a sequence of x, y coordinates of pen-tip movement and writing pressure. The features of a signature are derived from the coordinates and the writing pressure and are decomposed into two principal features, shape and motion, using the DP-matching technique. We found that each point of a signature varies to some degree from one signing to another; however, the degree of local variation differs from point to point, as some points are relatively stable and do not vary much while others are not. In this paper, we propose incorporating weighted local variations based on the stability of each point, so as to evaluate the difference between two signatures locally as well as globally. Dissimilarity measures are presented for the corresponding features and are combined into one for efficient verification. In addition to the x, y coordinates, the writing pressure is also considered part of the shape. Experiments were carried out with a database consisting of 300 genuine signatures and 300 forgeries collected from 10 subjects. The experimental results show the effectiveness of incorporating the weighted local variation: it increased the correct verification rate by 1.0% on average, to 98.7%.
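
    A hedged sketch of the overall idea follows: DP (DTW-style) alignment between two signature sequences of (x, y, pressure) samples, with each reference point weighted by how stable it is across genuine samples, so that stable points count more. The weighting and normalization used here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_dp_distance(ref, test, weights):
    """ref, test: (n, 3) and (m, 3) arrays of (x, y, pressure); weights: (n,) array."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = weights[i - 1] * np.linalg.norm(ref[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def stability_weights(genuine_samples):
    """genuine_samples: list of (n, 3) arrays resampled to a common length n."""
    var = np.var(np.stack(genuine_samples), axis=0).sum(axis=1)
    return 1.0 / (1.0 + var)        # low variance across genuine samples -> high weight
```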

  • Table-Form Structure Analysis Based on Box-Driven Reasoning

    Osamu HORI  David S. DOERMANN  

     
    PAPER-Document Recognition and Analysis

      Page(s):
    542-547

    Table-form document structure analysis is an important problem in the document processing domain. This paper presents a new method called Box-Driven Reasoning (BDR) to robustly analyze the structure of table-form documents that include touching characters and broken lines. Real documents are copied repeatedly and overlaid with printed data, resulting in characters that touch cells and lines that are broken. Most previous methods employ a line-oriented approach, but touching characters and broken lines make the procedure fail at an early stage. BDR deals with regions directly in contrast with other previous methods and a reduced resolution image is introduced to supplement information deteriorated by noise. Experimental tests show that BDR reliably recognizes cells and strings in document images with touching characters and broken lines.

  • Note Symbol Extraction for Printed Piano Scores Using Neural Networks

    Hidetoshi MIYAO  Yasuaki NAKANO  

     
    PAPER-Document Recognition and Analysis

      Page(s):
    548-554

    In traditional note-symbol extraction processes, extracted candidates for note elements are identified using complex if-then rules based on note formation rules, and these require subtle parameter adjustment through many experiments. The purpose of our system is to avoid such tedious tasks and to extract note heads, stems, and flags accurately and quickly, according to the following procedure. (1) We extract head and flag candidates based on the stem positions. (2) To identify heads and flags among the candidates, we use a pair of three-layer neural networks; to train the networks, we give the position information and reliability factors of the candidates to the input units. (3) With the learned weights, the head and flag candidates are recognized. In experiments, we obtained a high extraction rate of more than 99% for thirteen printed piano scores on A4 sheets with various difficulties. On a workstation (SPARCstation 10), processing took about 90 seconds per score on average, which means that our system can analyze piano scores five times or more as fast as manual work. Therefore, our system can execute the task without the traditional tedious work and can recognize scores quickly and accurately.

  • A Recognition Method of Facility Drawings and Street Maps Utilizing the Facility Management Database

    Chikahito NAKAJIMA  Toshihiro YAZAWA  

     
    PAPER-Document Recognition and Analysis

      Page(s):
    555-560

    This paper proposes a new approach for automatically inputting handwritten Distribution Facility Drawings (DFD) and their maps into a computer by using the Facility Management Database (FMD). Our recognition method makes use of this external information for drawing/map recognition. It identifies each electric-pole symbol and support-cable symbol on the drawings simply by consulting the FMD. Other symbols, such as transformers and electric wires, can then be placed on the drawings automatically. For this positioning of graphic symbols, we present a method that automatically adjusts a symbol's position on the latest digital maps. When a contradiction cannot be resolved due to an inconsistency between the contents of the DFD and the FMD, the system requests manual feedback from the operator. Furthermore, it uses the distribution network of the DFD to recognize street lines on maps that have not been computerized. This can drastically reduce the cost of computerizing drawings and maps.

  • Regular Section
  • Self-Tuning of Fuzzy Reasoning by the Steepest Descent Method and Its Application to a Parallel Parking

    Hitoshi MIYATA  Makoto OHKI  Masaaki OHKITA  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    561-569

    For fuzzy control that manipulates a variable so as to match the required output of a plant, tuning of the fuzzy rules is necessary. For this purpose, various methods for tuning the rules automatically have been proposed; some of them require much time for tuning, and others lack generalization capability. For fuzzy control tuned by the steepest descent method, the use of piecewise-linear membership functions (MSFs) has been proposed. In this algorithm, the MSFs of the premise of each fuzzy rule are tuned independently of the other rules, and only the MSFs corresponding to the given input and output data for learning are tuned, which is efficient. Compared with the conventional triangular and Gaussian MSFs, this expands the expressiveness. As a result, the number of training cycles needed to construct the inference rules can be reduced, and the generalization capability to express the behavior of a plant is increased. The effectiveness of this algorithm is illustrated with an example of parallel parking of an autonomous mobile robot.
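
    The sketch below shows the general shape of such a tuning loop: a simplified single-input fuzzy system whose membership functions are piecewise linear (values at shared grid points) is tuned by steepest descent on the squared output error. Numerical gradients are used for brevity; the paper's analytic update rules and its full rule structure are not reproduced.

```python
import numpy as np

def infer(x, grid, msf_vals, consequents):
    """Single-input inference; msf_vals: (n_rules, len(grid)) array of piecewise-linear MSFs."""
    mu = np.array([np.interp(x, grid, v) for v in msf_vals])
    return float(np.dot(mu, consequents) / (mu.sum() + 1e-12))

def train(samples, grid, msf_vals, consequents, lr=0.05, epochs=100, eps=1e-4):
    """samples: iterable of (x, y) pairs; msf_vals and consequents are float arrays."""
    for _ in range(epochs):
        for x, y in samples:
            for p in (msf_vals, consequents):            # numerical steepest descent
                grad = np.zeros_like(p)
                for idx in np.ndindex(p.shape):
                    p[idx] += eps
                    e_plus = (infer(x, grid, msf_vals, consequents) - y) ** 2
                    p[idx] -= 2 * eps
                    e_minus = (infer(x, grid, msf_vals, consequents) - y) ** 2
                    p[idx] += eps
                    grad[idx] = (e_plus - e_minus) / (2 * eps)
                p -= lr * grad
    return msf_vals, consequents
```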

  • A Comparison between the Computational Power of PARBS and RMBM

    Kensuke MIYASHITA  Yoshihiro TSUJINO  Nobuki TOKURA  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    570-578

    Processor networks connected by buses have attracted considerable attention. Since a reconfigurable array is more powerful than a PRAM and more practical, it has become a focus of attention. The Processor Array with Reconfigurable Bus System (PARBS) and the Reconfigurable Multiple Bus Machine (RMBM) are both models of parallel computation based on a reconfigurable bus and a processor array. The PARBS is a processor array consisting of processors arranged in a 2-dimensional grid with a reconfigurable bus system. The RMBM is also made of processors and a reconfigurable bus system, but the processors are located in a row, and the number of processors and the number of buses are independent of each other. Four versions of the RMBM have been proposed, and the Extended RMBM (E-RMBM) is regarded as the most powerful of them. In this paper, we show that a PARBS of size N x M can be simulated in constant time by an E-RMBM with 4NM processors, M + 3 buses, and 1 read buffer, and that an E-RMBM with P processors, B buses, and D read buffers can likewise be simulated in constant time by a PARBS of size B x P. A PARBS or RMBM that solves a computational problem of size n is polynomially bounded iff the product of the number of processors, buses, and read and write ports is O(n^c) for some constant c. When a PARBS is polynomially bounded, the E-RMBM that simulates it is also polynomially bounded, and vice versa.

  • On the Complexity of Finding Cycles in Periodic Functions Using the Quantum Turing Machine

    Takashi MIHARA  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    579-583

    There are some results indicating that a quantum computer seems to be more powerful than ordinary computers. In fact, P.W. Shor showed that a quantum computer can find discrete logarithms and factor integers in polynomial time with bounded error probability. No polynomial time algorithms to find them using ordinary computers are known. In this paper, we show that the cycles in some kinds of periodic functions, e.g., functions proposed as pseudo-random generators, can be found in polynomial time with bounded error probability on a quantum Turing machine. In general, it is known that ordinary computers take exponential time to find the cycles in periodic functions.

  • A Method for C2 Piecewise Quartic Polynomial Interpolation

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

      Page(s):
    584-590

    A new global method for constructing a C2 piecewise quartic polynomial curve is presented. The coefficient matrix of equations which must be solved to construct the curve is tridiagonal. The joining points of adjacent curve segments are the given data points. The constructed curve reproduces exactly a polynomial of degree four or less. The results of experiments to test the efficiency of the new method are also shown.
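
    The construction described above leads to a tridiagonal linear system; the sketch below is the standard Thomas-algorithm solver for such systems, included only to illustrate that step. The specific equations defining the quartic pieces are not reproduced here.

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(diag)
    c, d = np.zeros(n), np.zeros(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# small check: [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has solution [1,1,1]
print(solve_tridiagonal(np.array([0., 1, 1]), np.array([2., 2, 2]),
                        np.array([1., 1, 0]), np.array([3., 4, 3])))
```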

  • Visualization of Temporal and Spatial Information in Natural Language Descriptions

    Hiromi BABA  Tsukasa NOMA  Naoyuki OKADA  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

      Page(s):
    591-599

    This paper discusses visualization of temporal and spatial information in natural language descriptions (NLDs), focusing on the process of translating intermediate representations of NLDs into proper "scenarios" and "environments" for animations. First, the intermediate representations are shown according to the idea of actors. Actors and non-actors are represented as primitives of objects, whereas actions are represented as primitives of events. Temporal and spatial constraints imposed by a given NLD text are placed on the primitives. Then, the representations containing unknown temporal or spatial parameters (time and coordinates) are translated into evaluation functions, in which the unlikelihood of deviations from the predicted temporal or spatial relations is estimated. In particular, the functions concerning an actor's movements contain both temporal and spatial parameters. Next, the sum of all the evaluation functions is minimized by a nonlinear optimization method. Thus, the most proper actors' time-table (scenario) and non-actors' location-table (environment) for visualization are obtained. Implementation and experiments show that temporal and spatial information in NLDs are well connected through actors' movements for visualization.

  • Eugenics-Based Genetic Algorithm

    Ju YE  Masahiro TANAKA  Tetsuzo TANINO  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    600-607

    The efficiency of genetic algorithms has been attracting the attention of the genetic algorithm community, and over the last decade considerable research has focused on improving their performance. However, this work is generally conducted within the framework of the natural evolutionary mechanism, and the major genetic operators, crossover and mutation, are activated by prior probabilities. An operator based on a prior probability is random: unpromising individuals are frequently operated on, while promising individuals sometimes are not. Moreover, since the evaluation function is the link between the genetic algorithm and the problem to be solved, it provides heuristic information for the evolutionary search, and how this heuristic information (present and past) is used influences the efficiency of the search. This paper presents, as an attempt, a eugenics-based genetic algorithm (EGA): a genetic algorithm that reflects the human decision will (eugenics) and fully utilizes the heuristic information provided by the evaluation function for its decisions. In other words, EGA = evolutionary mechanisms + human decision will + heuristic information. In EGA, the ideas of positive eugenics and negative eugenics are applied as the principle of selection, and selections are activated not by prior probabilities but by the evaluation values of individuals. A method of genealogical chain-based selection for mutation is proposed, which avoids the blindness of stochastic mutation and the disruptive problem of mutation. A control strategy of reasonable competition is proposed, which brings the effects of crossover and mutation into full play. Three examples are given to show the good performance of EGA: the minimization of a standard test function (De Jong's test function F2), a typical combinatorial optimization problem (the traveling salesman problem), and a nonlinear system identification problem.
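
    The toy sketch below gives one schematic reading of the selection idea: instead of activating crossover and mutation by fixed prior probabilities, individuals are chosen by their evaluation values, with the better half recombined and the worse half mutated. This is only an illustration of that idea, not the paper's full EGA with genealogical chain-based selection and its competition control strategy.

```python
import random

def ega_step(population, evaluate, crossover, mutate):
    """One generation driven by evaluation values rather than prior probabilities."""
    ranked = sorted(population, key=evaluate, reverse=True)     # higher fitness first
    n = len(ranked)
    elite = ranked[: n // 2]                                    # better half is recombined
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(n - len(elite))]
    mutants = [mutate(ind) for ind in ranked[n // 2:]]          # worse half is mutated
    # children and mutants compete for the remaining slots of the next generation
    rest = sorted(children + mutants, key=evaluate, reverse=True)[: n - len(elite)]
    return elite + rest
```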

  • Source Localization with Network Inversion Using an Answer-in-Weights Scheme

    Takehiko OGAWA  Keisuke KAMEYAMA  Roman KUC  Yukio KOSUGI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    608-619

    A new neural network for locating a source by integrating data from a number of sensors is considered. The network solves inverse problems using a back-propagation algorithm, with an architecture that encodes the solution in the inter-layer weights. Three different physical quantities are applied to the network, since the scheme has three independent ports: an input port, a tutorial port, and an answer port. The architecture is useful for estimating "z" in problems of the form y = f(x, z), where y is the observed data, x is the sensor position, and z is the source location. The network integrates the information obtained from a number of sensors and estimates the location of the source. We apply the network to two location estimation problems: localizing active nerves from their evoked potential waveforms, and localizing objects from their echoes using an active sonar system.

  • One Approach for Image Edge Sharpening

    Hiroshi KONDO  Suharno AGUS  Mariko MOROKUMA  

     
    LETTER-Image Processing,Computer Graphics and Pattern Recognition

      Page(s):
    620-624

    An approach to image edge sharpening that incorporates phase-only synthesis (POS) is presented. The technique is a generalized version of traditional Laplacian image enhancement. Using an internally dividing point between the POS image and the Laplacian-enhanced image makes the approach more flexible and more effective.
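
    A hedged sketch of the ingredients named above follows: a phase-only synthesis (POS) image, a classic Laplacian-enhanced image, and an internally dividing point between the two, i.e. a weighted combination controlled by a parameter alpha in [0, 1]. The exact combination used in the letter is not reproduced; this is illustrative only.

```python
import numpy as np
from scipy.ndimage import laplace

def phase_only_synthesis(img):
    """Keep only the Fourier phase (unit magnitude) and transform back."""
    spec = np.fft.fft2(img.astype(float))
    return np.real(np.fft.ifft2(np.exp(1j * np.angle(spec))))

def sharpen(img, alpha=0.5):
    lap_enhanced = img.astype(float) - laplace(img.astype(float))  # classic Laplacian sharpening
    pos = phase_only_synthesis(img)
    return alpha * pos + (1.0 - alpha) * lap_enhanced              # internally dividing point
```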