
Author Search Result

[Author] Eiichi TANAKA (15 hits)

  • A Two Dimensional Context-Dependent Similarity Measure

    Eiichi TANAKA  

     
    PAPER-Picture Processing

      Vol:
    E68-E No:10
      Page(s):
    667-673

    A picture, as a set of pixels, is essentially context-dependent. This paper describes the picture matching problem based on context-dependent edit operations. The edit operations investigated allow substituting a pixel of a picture with another pixel, deleting pixels from a picture, and inserting pixels into a picture, with costs determined by their contexts. The computational complexity of determining the similarity between pictures with N×N pixels is O(N^4). An approximate computing method, two separation theorems, and isomorphism and molluscoidal isomorphism between pictures are described. This measure can be applied to pattern recognition problems.
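
    The sketch below is only a loose, one-dimensional analogue of the idea above: an edit distance in which substitution, deletion and insertion costs may depend on a neighbouring symbol. It is not the paper's two-dimensional measure, and the cost functions and names are illustrative assumptions.

    ```python
    # Minimal sketch: edit distance where operation costs may depend on context.
    # This is a 1-D analogue for illustration only; the paper defines the
    # context-dependent operations on 2-D pictures (pixels), not strings.

    def context_dependent_distance(a, b,
                                   sub_cost=lambda x, y, ctx: 0 if x == y else 1,
                                   del_cost=lambda x, ctx: 1,
                                   ins_cost=lambda y, ctx: 1):
        """Edit distance from string a to string b.

        Each cost function receives the symbol(s) involved and the preceding
        symbol of `a` as a crude stand-in for "context" (illustrative choice).
        """
        n, m = len(a), len(b)
        # dp[i][j] = minimum cost to transform a[:i] into b[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            ctx = a[i - 2] if i >= 2 else None
            dp[i][0] = dp[i - 1][0] + del_cost(a[i - 1], ctx)
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] + ins_cost(b[j - 1], None)
        for i in range(1, n + 1):
            ctx = a[i - 2] if i >= 2 else None
            for j in range(1, m + 1):
                dp[i][j] = min(
                    dp[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1], ctx),
                    dp[i - 1][j] + del_cost(a[i - 1], ctx),
                    dp[i][j - 1] + ins_cost(b[j - 1], ctx),
                )
        return dp[n][m]

    if __name__ == "__main__":
        # Deleting a symbol that repeats its left neighbour is cheap here.
        cheap_run_delete = lambda x, ctx: 0.2 if x == ctx else 1.0
        print(context_dependent_distance("aabbb", "abb", del_cost=cheap_run_delete))
    ```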

  • Efficient Algorithms for Finding Largest Similar Substructures in Unordered Trees

    Shaoming LIU  Eiichi TANAKA  

     
    PAPER

      Vol:
    E79-A No:4
      Page(s):
    428-440

    This paper discusses the problems of largest similar substructures (in short, LSS) in rooted and unordered trees (in short, R-trees) and those in unrooted and unordered trees (in short, trees). For two R-trees (or trees) T_a and T_b, the LSS in T_b to T_a is defined, and two algorithms, one for finding an LSS for R-trees and one for trees, are proposed. The time and space complexities of both algorithms are O_T(m^3 N_a N_b) and O_S(m N_a N_b), respectively, where m is the largest degree of a vertex of T_a and T_b, and N_a (N_b) is the number of vertices of T_a (T_b).

  • Metrics between Trees Embedded in a Plane and Their Computing Methods

    Eiichi TANAKA  

     
    PAPER

      Vol:
    E79-A No:4
      Page(s):
    441-447

    A tree embedded in a plane can be characterized as an unrooted and cyclically ordered tree (CO-tree). This paper describes new definitions of three distances between CO-trees and their computing methods. The proposed distances are based on the Tai mapping, the structure preserving mapping and the strongly structure preserving mapping, and are called the Tai distance (TD), the structure preserving distance (SPD) and the strongly structure preserving distance (SSPD), respectively. The new definitions and their computing methods are simpler than the old ones. TD and SPD by the new definitions are more sensitive than those by the old ones, and the SSPDs by both definitions are equivalent. The time complexities of computing TD, SPD and SSPD between CO-trees T_a and T_b are O_T(N_a^2 N_b^2), O_T(m_a N_a N_b^2) and O_T(m_a m_b N_a N_b), respectively, where N_a (N_b) and m_a (m_b) are the number of vertices in tree T_a (T_b) and the maximum degree of a vertex in T_a (T_b), respectively. The space complexities of these methods are O_S(N_a N_b).

  • An Error-Correcting Version of the Leiss's Parser for Context-Free Languages

    Ken-ichi KURODA  Eiichi TANAKA  

     
    LETTER-Automaton, Language and Theory of Computing

      Vol:
    E76-D No:12
      Page(s):
    1528-1531

    This paper describes an error-correcting parser (ec-parser) for context-free languages that is an extension of Leiss's parser. Since the ec-parser uses precomputed information and a pruning technique based on lookahead, it is always faster than Lyon's parser. Several examples are shown.

  • Similar Key Search Files Based on Hashing

    Sheng-ta YANG  Eiichi TANAKA  

     
    LETTER-Databases

      Vol:
    E80-D No:1
      Page(s):
    101-105

    The storage utilizations of existing similar key search files based on B+-trees and extensible hashing were under 70% and should be improved. A similar key search file based on extensible hashing with partial expansion and one based on linear hashing with partial expansion are proposed. Computer simulations on about 230 thousand English words show that the storage utilizations of the files with 32 expansion steps are about 97%.
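
    As background for the hashing schemes mentioned above, the sketch below implements plain linear hashing (without partial expansion and without any similar-key machinery) and reports its bucket utilization. The bucket capacity, split policy and names are illustrative assumptions, not the proposed file organization.

    ```python
    # Background sketch only: classic linear hashing (no partial expansion,
    # no similar-key search), with a simple storage-utilization report.
    # Bucket capacity and split policy are illustrative assumptions.

    class LinearHashFile:
        def __init__(self, capacity=4):
            self.capacity = capacity        # records per bucket
            self.level = 0                  # current round of expansion
            self.split = 0                  # next bucket to be split
            self.buckets = [[]]             # start with a single bucket

        def _addr(self, key):
            h = hash(key)
            addr = h % (2 ** self.level)
            if addr < self.split:           # already-split buckets use h_{level+1}
                addr = h % (2 ** (self.level + 1))
            return addr

        def insert(self, key):
            addr = self._addr(key)
            self.buckets[addr].append(key)
            if len(self.buckets[addr]) > self.capacity:
                # Linear hashing splits buckets in a fixed order,
                # not necessarily the bucket that just overflowed.
                self._split_next()

        def _split_next(self):
            self.buckets.append([])
            old = self.buckets[self.split]
            self.buckets[self.split] = []
            self.split += 1
            if self.split == 2 ** self.level:   # expansion round finished
                self.level += 1
                self.split = 0
            for key in old:                      # redistribute with the new address
                self.buckets[self._addr(key)].append(key)

        def utilization(self):
            stored = sum(len(b) for b in self.buckets)
            return stored / (len(self.buckets) * self.capacity)

    if __name__ == "__main__":
        f = LinearHashFile()
        for word in ("tree metric distance parser hashing key search file "
                     "pattern picture structure mapping").split():
            f.insert(word)
        print(f"buckets={len(f.buckets)}, utilization={f.utilization():.2f}")
    ```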

  • Response of a Superconducting Transition-Edge Sensor Microcalorimeter with a Mushroom-shaped Absorber to L X-rays Emitted by Transuranium Elements Open Access

    Keisuke MAEHATA  Makoto MAEDA  Naoko IYOMOTO  Kenji ISHIBASHI  Keisuke NAKAMURA  Katsunori AOKI  Koji TAKASAKI  Kazuhisa MITSUDA  Keiichi TANAKA  

     
    INVITED PAPER

      Vol:
    E98-C No:3
      Page(s):
    178-185

    A four-pixel-array superconducting transition-edge sensor (TES) microcalorimeter with a mushroom-shaped absorber was fabricated for energy dispersive spectroscopy performed on a transmission electron microscope. The TES consists of a bilayer of Au/Ti with either a 50-nm or a 120-nm thickness. The 5.0-µm-thick absorber is made from a Au layer, and its stem is deposited in the center of the TES surface. A Ta2O5 insulating layer of 100-nm thickness is inserted between the overhang region of the absorber and the TES surface. A selected pixel of the TES microcalorimeter was operated for the detection of Np L X-rays emitted from an 241Am source. The response of the TES microcalorimeter to L X-rays was obtained by analyzing the detection signal pulses using the optimal filter method. An energy resolution of 33 eV full width at half maximum was obtained at 17.751 keV (Np Lβ1), taking its natural width of 13.4 eV into account. The response to L X-rays emitted from a mixture source of 238Pu, 239Pu and 241Am was obtained by operating the selected pixel of the TES microcalorimeter. Major L X-ray peaks of progeny elements of the α decay of Pu and Am isotopes were clearly identified in the obtained energy spectrum. The experimental results demonstrated the separation of 241Am and plutonium isotopes by L X-ray spectroscopy.
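
    The pulse analysis above relies on the optimal filter method. As a generic illustration only (not the authors' processing chain), the sketch below estimates a pulse amplitude by least-squares fitting a known template to a noisy record, which is the optimal filter in the white-noise special case; the template shape and noise level are made-up parameters.

    ```python
    # Generic illustration of optimal filtering in its simplest form:
    # least-squares amplitude estimation against a known pulse template,
    # i.e. the optimal filter under white noise. This is NOT the authors'
    # processing chain; all shapes and parameters are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pulse template: fast rise, exponential decay (arbitrary illustrative shape).
    t = np.arange(1024)
    template = (1 - np.exp(-t / 5.0)) * np.exp(-t / 200.0)
    template /= template.max()

    # Simulated detection record: scaled template plus white noise.
    true_amplitude = 3.7
    record = true_amplitude * template + rng.normal(scale=0.2, size=t.size)

    # Optimal (matched) filter estimate for white noise:
    #   amplitude = <template, record> / <template, template>
    estimate = np.dot(template, record) / np.dot(template, template)
    print(f"true={true_amplitude:.3f}  estimated={estimate:.3f}")
    ```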

  • A Computing Algorithm for the Tree Metric Based on the Structure Preserving Mapping

    Eiichi TANAKA  

     
    PAPER-Automata and Languages

      Vol:
    E68-E No:5
      Page(s):
    317-324

    This paper describes a computing algorithm for the tree distance based on the structure preserving mapping. The distance is defined as the minimum sum of the weights of edit operations needed to transform tree T_α to tree T_β under the restriction of the structure preserving mapping. The edit operations allow substituting a vertex of a tree with another, deleting a vertex from a tree, and inserting a vertex into a tree. The proposed algorithm computes the distance between T_α and T_β in time O_T(m_β N_α N_β) and space O_S(N_α N_β), where N_α, N_β, m_α and m_β are the number of vertices of T_α, that of T_β, the maximum number of children of a vertex in T_α and that in T_β, respectively. Possible applications include pattern recognition, syntactic error detection and correction for natural and artificial languages, and information retrieval for tree-structured data.
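
    As a much simpler relative of the tree distance discussed above, the sketch below implements Selkow's classic top-down edit distance for rooted, ordered trees with unit costs. It is not the structure preserving distance of this paper; the tuple-based tree representation and unit costs are illustrative assumptions.

    ```python
    # Illustrative sketch: Selkow's top-down edit distance between rooted,
    # ordered trees with unit costs. A simpler classic relative of tree
    # distances such as the one in this paper, not the structure preserving
    # distance itself. A tree is a tuple (label, [children...]).

    def tree_size(tree):
        label, children = tree
        return 1 + sum(tree_size(c) for c in children)

    def selkow_distance(a, b):
        """Top-down edit distance: a mismatched subtree can only be deleted
        or inserted as a whole; the two roots are always mapped to each other."""
        cost = 0 if a[0] == b[0] else 1          # relabel the two roots
        ca, cb = a[1], b[1]
        n, m = len(ca), len(cb)
        # dp[i][j] = distance between the first i subtrees of a and first j of b
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = dp[i - 1][0] + tree_size(ca[i - 1])    # delete subtree
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] + tree_size(cb[j - 1])    # insert subtree
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = min(
                    dp[i - 1][j - 1] + selkow_distance(ca[i - 1], cb[j - 1]),
                    dp[i - 1][j] + tree_size(ca[i - 1]),
                    dp[i][j - 1] + tree_size(cb[j - 1]),
                )
        return cost + dp[n][m]

    if __name__ == "__main__":
        ta = ("a", [("b", []), ("c", [("d", [])])])
        tb = ("a", [("c", [("d", [])]), ("e", [])])
        print(selkow_distance(ta, tb))   # 2: delete subtree b, insert subtree e
    ```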

  • A Parse Algorithm for a Context-Sensitive Language

    Eiichi TANAKA  King-Sun FU  

     
    LETTER-Automata and Languages

      Vol:
    E60-E No:4
      Page(s):
    183-184

    This paper describes a deterministic parsing algorithm for a context-sensitive language. The algorithm is an extension of the Cocke-Kasami-Younger parsing algorithm for context-free languages to context-sensitive languages. The execution time of the algorithm is more than O(n^3) for an input of length n.
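
    For reference, the sketch below is a minimal CYK (Cocke-Kasami-Younger) recognizer for a context-free grammar in Chomsky normal form, i.e. the O(n^3) base algorithm that the letter extends; the context-sensitive extension itself is not reproduced here, and the toy grammar is an illustrative assumption.

    ```python
    # Minimal CYK recognizer for a context-free grammar in Chomsky normal form.
    # This is the O(n^3) base algorithm; the context-sensitive extension of the
    # letter above is not sketched here.

    def cyk(word, terminal_rules, binary_rules, start="S"):
        """terminal_rules: {nonterminal: {terminal, ...}}
           binary_rules:   {nonterminal: {(B, C), ...}}"""
        n = len(word)
        if n == 0:
            return False
        # table[i][l-1] = set of nonterminals deriving word[i:i+l]
        table = [[set() for _ in range(n)] for _ in range(n)]
        for i, ch in enumerate(word):
            table[i][0] = {a for a, ts in terminal_rules.items() if ch in ts}
        for length in range(2, n + 1):            # span length
            for i in range(n - length + 1):       # span start
                for split in range(1, length):    # split point inside the span
                    left = table[i][split - 1]
                    right = table[i + split][length - split - 1]
                    for a, rhs in binary_rules.items():
                        if any(b in left and c in right for (b, c) in rhs):
                            table[i][length - 1].add(a)
        return start in table[0][n - 1]

    if __name__ == "__main__":
        # Toy CNF grammar for the language {a^n b^n : n >= 1} (illustrative only).
        terminals = {"A": {"a"}, "B": {"b"}}
        binaries = {"S": {("A", "B"), ("A", "X")}, "X": {("S", "B")}}
        print(cyk("aabb", terminals, binaries))   # True
        print(cyk("aab", terminals, binaries))    # False
    ```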

  • An Improved Error-Correcting Parser for a Context-Free Language

    Eiichi TANAKA  

     
    PAPER-Automata and Languages

      Vol:
    E67-E No:7
      Page(s):
    379-385

    This paper describes an error-correcting parser for a context-free language based on the Graham-Harrison-Ruzzo context-free recognizer (GHR). The parser has characteristics similar to those of GHR: (1) it unifies both the top-down error-correcting parser by Lyon and the bottom-up error-correcting parsers by Tanaka-Fu and Yamasaki-Tonomura; (2) it is conceptually simpler than Lyon's parser, and may be much faster than the bottom-up error-correcting parsers.

  • Two Dimensional Largest Common Subpatterns between Pictures

    Eiichi TANAKA  Sumio MASUDA  

     
    LETTER-Image Processing,Computer Graphics and Pattern Recognition

      Vol:
    E79-D No:9
      Page(s):
    1358-1361

    Several two-dimensional largest common subpatterns (LCPs) between pictures are defined and their computing methods are proposed. The time and space complexities of the computing methods are O(IJMN) to obtain the size of the LCPs between a picture with I×J pixels and a picture with M×N pixels. These LCPs can be used as similarity measures between pictures and can be applied to texture recognition and classification.
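
    The entry does not spell out the LCP definitions used; as a loosely related illustration of a four-index dynamic program with the same O(IJMN) flavour, the sketch below computes the largest common square block between two pictures. It is a textbook-style analogue, not one of the paper's LCPs.

    ```python
    # Loosely related illustration only: largest common *square* block between
    # two pictures (2-D arrays), via a four-index DP of the same O(IJMN) flavour.
    # This is a textbook-style analogue, not the LCP definitions of the paper.

    def largest_common_square(a, b):
        I, J = len(a), len(a[0])
        M, N = len(b), len(b[0])
        # dp[i][j][k][l] = side of the largest common square whose bottom-right
        # corners are at a[i][j] and b[k][l]
        dp = [[[[0] * N for _ in range(M)] for _ in range(J)] for _ in range(I)]
        best = 0
        for i in range(I):
            for j in range(J):
                for k in range(M):
                    for l in range(N):
                        if a[i][j] != b[k][l]:
                            continue
                        if min(i, j, k, l) == 0:
                            dp[i][j][k][l] = 1
                        else:
                            dp[i][j][k][l] = 1 + min(dp[i - 1][j - 1][k - 1][l - 1],
                                                     dp[i - 1][j][k - 1][l],
                                                     dp[i][j - 1][k][l - 1])
                        best = max(best, dp[i][j][k][l])
        return best

    if __name__ == "__main__":
        pic1 = ["0110",
                "0110",
                "1111"]
        pic2 = ["110",
                "110",
                "001"]
        print(largest_common_square(pic1, pic2))   # 2 (a 2x2 block of '1's)
    ```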

  • Transition Edge Sensor-Energy Dispersive Spectrometer (TES-EDS) and Its Applications Open Access

    Keiichi TANAKA  Akikazu ODAWARA  Atsushi NAGATA  Yukari BABA  Satoshi NAKAYAMA  Shigenori AIDA  Toshimitsu MOROOKA  Yoshikazu HOMMA  Izumi NAKAI  Kazuo CHINONE  

     
    INVITED PAPER

      Vol:
    E92-C No:3
      Page(s):
    334-340

    The Transition Edge Sensor (TES)-Energy Dispersive Spectrometer (EDS) is an X-ray detector with high energy resolution (12.8 eV). The TES can be mounted on a scanning electron microscope (SEM). The TES-EDS is based on a cryogen-free dilution refrigerator. The high energy resolution enables analysis of the distribution of various elements in samples under a low acceleration voltage (typically under 5 keV) by using the K lines of light elements and the M lines of heavy elements. For example, the energy of the arsenic L line differs from that of the magnesium K line by 28 eV. When used to analyze a spore of the Pteris vittata L. plant, the TES-EDS clearly reveals the different distributions of As and Mg in the micro region of the plant. The TES-EDS with SEM yields detailed information about the distribution of multiple elements in a sample.

  • A Similar Key Search File Based on Extendible Hashing

    Shinji KAWADE  Eiichi TANAKA  

     
    LETTER-Databases

      Vol:
    E78-D No:9
      Page(s):
    1218-1220

    The organization of the proposed file and the algorithms for searching and maintaining it are simpler than those of a similar key search file based on a B+-tree. An experiment using 230,188 keys with lengths 1-16 shows the good performance of the file. The storage utilization of the file is about 68%.

  • The Largest Common Similar Substructure Problem

    Shaoming LIU  Eiichi TANAKA  

     
    PAPER

      Vol:
    E80-A No:4
      Page(s):
    643-650

    This paper discusses the largest common similar substructure (in short, LCSS) problem for trees. The problem is, over all pairs of a substructure of A and a substructure of B, to find a pair, denoted by A' and B', such that A' is most similar to B' and the sum of the numbers of vertices of A' and B' is largest. An algorithm for the LCSS problem for unrooted and unordered trees (in short, trees) and one for trees embedded in a plane (in short, CO-trees) are proposed. The time complexity of the algorithm for trees is O(max(m_a, m_b)^2 N_a N_b) and that for CO-trees is O(m_a m_b N_a N_b), where m_a (m_b) and N_a (N_b) are the largest degree of a vertex of tree T_a (T_b) and the number of vertices of T_a (T_b), respectively. It is easy to modify the algorithms to enumerate all of the LCSSs for trees and CO-trees. The algorithms can be applied to structure-activity studies in chemistry and various structure comparison problems.

  • A Metric between Unrooted and Unordered Trees and Its Top-down Computing Method

    Tomokazu MUGURUMA  Eiichi TANAKA  Sumio MASUDA  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:5
      Page(s):
    555-566

    Many metrics between trees have been proposed. However, there has been no research on a graph metric that can be applied to molecular graphs, and most reports on tree metrics have dealt with rooted and ordered trees. As a first step toward defining a graph metric for molecular graphs, this paper proposes a tree metric between unrooted and unordered trees. This metric is based on a mapping between trees that determines a transformation from one tree to another; the metric is the minimum weight among the weights of all possible transformations. The characteristics of the mapping are investigated, and a top-down computing method is proposed using them. The time and space complexities are O_T(N_a^2 N_b^2 (N_a^3 N_b^3)) and O_S(N_a^2 N_b^2), respectively, where N_a and N_b are the numbers of vertices of the two trees. If the degrees of all vertices of the trees are bounded by a constant, the time complexity of the method is O(N_a^3 N_b^3). The computing time to obtain the distance between a pair of molecular graphs on a computer (SUN SparcStation ELC) is 0.51 seconds on average over all pairs of 111 molecular graphs that have 12.0 atoms on average. This metric can be applied to the clustering of molecular graphs.

  • The Distances between Unrooted and Cyclically Ordered Trees and Their Computing Methods

    Shaoming LIU  Eiichi TANAKA  Sumio MASUDA  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:10
      Page(s):
    1094-1105

    Several distances between trees have been proposed. However, most reports on distances have dealt with rooted and ordered trees. This paper proposes two distances between unrooted and cyclically ordered trees (CO-trees) and their computing methods. A CO-tree is a tree embedded in a plane. These distances are defined based on Tai's mapping (TM) and a strongly structure preserving mapping (SSPM) between CO-trees. The time complexities of computing the distances between two CO-trees T_a and T_b are O_T(N_a^2 N_b^2) for the distance based on a TM and O_T(m_a m_b N_a N_b) for that based on an SSPM, where m_a (m_b) and N_a (N_b) are the largest degree of a vertex and the number of vertices of T_a (T_b), respectively. The space complexities of both methods are O_S(N_a N_b). These distances can be applied to the clustering of CO-trees.