
Keyword Search Result

[Keyword] RP (993 hits)

Results 901-920 of 993

  • Algorithm Transformation for Cube-Type Networks

    Masaru TAKESUE  

     
    PAPER-Algorithms
    Vol: E79-D No:8  Page(s): 1031-1037

    This paper presents a method for mechanically transforming a parallel algorithm on an original network so that the algorithm can work on a target network. The networks are assumed to be of cube type, such as the shuffle-exchange network, the omega network, and the hypercube. Were those networks isomorphic to each other, the algorithm transformation would be an easy task. The proposed transformation method is based on a novel graph-embedding scheme <φ: δ, κ, π, ψ>. In addition to the dilating operation δ of the usual embedding scheme <φ: δ>, the novel scheme uses three primitive graph-transformation operations: κ (= δ⁻¹) for contracting a path into a node, π for pipelining a graph, and ψ (= π⁻¹) for folding a pipelined graph. By applying the primitive operations, the cube-type networks can be transformed so as to be isomorphic to each other. The relationships between the networks are represented by the composition of the applied operations. With the isomorphic mapping φ, an algorithm in a node of the original network can be simulated in the corresponding node(s) of the target network. Thus the algorithm transformation is reduced to routine work.
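To make the cube-type family concrete, here is a small illustrative sketch (ours, not the paper's transformation method) of the neighbor relations in a shuffle-exchange network and a hypercube on 2^k nodes:

```python
# Illustrative neighbor relations of two cube-type networks on 2^k nodes.
# This is a toy sketch for orientation only, not the paper's embedding scheme.

def shuffle(x, k):
    """The 'shuffle' edge: cyclic left shift of the k-bit node address."""
    return ((x << 1) | (x >> (k - 1))) & ((1 << k) - 1)

def exchange(x):
    """The 'exchange' edge: flip the lowest address bit."""
    return x ^ 1

def hypercube_neighbors(x, k):
    """Neighbors of node x in a k-dimensional hypercube: flip each bit."""
    return [x ^ (1 << i) for i in range(k)]
```

For example, with k = 3, node 0b101 shuffles to 0b011, exchanges to 0b100, and in the hypercube is adjacent to 0b100, 0b111, and 0b001.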

  • The Optimum Approximate Restoration of Multi-Dimensional Signals Using the Prescribed Analysis or Synthesis Filter Bank

    Takuro KIDA  Yi ZHOU  

     
    PAPER-Digital Signal Processing
    Vol: E79-A No:6  Page(s): 845-863

    We present a systematic theory for the optimum sub-band interpolation using a given analysis or synthesis filter bank with a prescribed coefficient bit length. Recently, a similar treatment was presented by Kida, and quantization of the decimated sample values is partly contained in that discussion [13]. However, in his previous treatment, the measures of error are defined abstractly, and no concrete functional forms of the measures are discussed. Further, in the previous discussion, quantization is neglected in the proof of the reciprocal theorem. In this paper, linear quantization of the decimated sample values is also included and, under some conditions, we present concrete functional forms of the worst-case measures of error, or a pair of an upper bound and a lower limit of those measures, in the variable domain. These measures of error are defined in Rⁿ, whereas the measure of error in the literature [13] is more general but must be defined separately in each limited block. Based on a concrete expression of the measure of error, we present a similar reciprocal theorem for a filter bank, even though quantization of the decimated sample values is included in the discussion. Examples are given for QMF banks and cosine-modulated FIR filter banks. It is shown that favorable linear-phase FIR filter banks are easily realized from cosine-modulated FIR filter banks by using the reciprocal relation and a new transformation, called cosine-sine modulation, in the design of filter banks.

  • A Method for C² Piecewise Quartic Polynomial Interpolation

    Caiming ZHANG  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E79-D No:5  Page(s): 584-590

    A new global method for constructing a C² piecewise quartic polynomial curve is presented. The coefficient matrix of the equations that must be solved to construct the curve is tridiagonal. The joining points of adjacent curve segments are the given data points, and the constructed curve exactly reproduces any polynomial of degree four or less. The results of experiments to test the efficiency of the new method are also shown.
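The tridiagonal coefficient matrix is what makes the construction cheap: such systems are solvable in O(n) by the Thomas algorithm. A minimal illustrative sketch (not the paper's code):

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal linear system in O(n).

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the system [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has solution [1,1,1].
```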

  • Robust n-Gram Model of Japanese Character and Its Application to Document Recognition

    Hiroki MORI  Hirotomo ASO  Shozo MAKINO  

     
    PAPER-Postprocessing
    Vol: E79-D No:5  Page(s): 471-476

    A new postprocessing method using an interpolated n-gram model for Japanese documents is proposed. The method has advantages over conventional approaches in enabling high-speed, knowledge-free processing. In estimating the parameters of an n-gram model for a large vocabulary, it is difficult to obtain sufficient training samples. To overcome this scarcity of samples, two smoothing methods for a Japanese character trigram model are evaluated, and the superiority of the deleted interpolation method is shown in terms of perplexity. A document recognition system based on the trigram model is constructed, which finds maximum-likelihood solutions through the Viterbi algorithm. Experimental results for three kinds of documents show that the performance is high when the deleted interpolation method is used for smoothing: 90% of OCR errors are corrected for documents similar to the training text data, and 75% of errors are corrected for documents not so similar to the training text data.
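The interpolation idea can be sketched as follows (an illustrative toy, not the paper's system; the fixed lambda weights here stand in for the held-out estimation performed by deleted interpolation):

```python
from collections import Counter

def train_interpolated_trigram(tokens, lambdas=(0.1, 0.3, 0.6)):
    """Interpolated trigram model: P(w|u,v) = l1*P(w) + l2*P(w|v) + l3*P(w|u,v).

    The lambda weights are illustrative placeholders; deleted interpolation
    would estimate them on held-out data instead.
    """
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
    n = len(tokens)
    l1, l2, l3 = lambdas

    def prob(w, u, v):
        p1 = uni[w] / n                                      # unigram
        p2 = bi[(v, w)] / uni[v] if uni[v] else 0.0          # bigram
        p3 = tri[(u, v, w)] / bi[(u, v)] if bi[(u, v)] else 0.0  # trigram
        return l1 * p1 + l2 * p2 + l3 * p3

    return prob
```

Because the lower-order terms never vanish, an unseen trigram still receives nonzero probability, which is exactly what makes the smoothed model robust for postprocessing noisy OCR output.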

  • Quantitative Evaluation of Improved Global Interpolation in the Segmentation of Handwritten Numbers Overlapping a Border

    Satoshi NAOI  Misako SUWA  Maki YABUKI  

     
    PAPER-Segmentation
    Vol: E79-D No:5  Page(s): 456-463

    The global interpolation method we previously proposed can extract a handwritten alphanumeric character pattern even if it overlaps a border. After the borders are removed, our method interpolates the blank segments in a character by globally evaluating segment pattern continuity and connectedness, producing characters with smooth edges. The main feature of this method is to evaluate global component-label connectivity as pattern connectedness. However, the method cannot interpolate missing superimposed loop segments, because they lack segment pattern continuity and already have global component-label connectivity. To solve this problem, we improved the method by adding loop interpolation as a global evaluation. The evaluation of character segment continuity is also improved to achieve higher-quality character patterns. Since there is no database of overlapping characters, we also propose an evaluation method that generates various kinds of overlapping numerals from an ETL database. Experimental results using these generated patterns showed that the improved global interpolation method is very effective for numbers that overlap a border.

  • Succeeding Word Prediction for Speech Recognition Based on Stochastic Language Model

    Min ZHOU  Seiichi NAKAGAWA  

     
    PAPER-Speech Processing and Acoustics
    Vol: E79-D No:4  Page(s): 333-342

    For the purpose of automatic speech recognition, language models (LMs) are used to predict possible succeeding words for a given partial word sequence and thereby to reduce the search space. In this paper, several kinds of stochastic language models (SLMs) are evaluated: bigram, trigram, hidden Markov model (HMM), bigram-HMM, stochastic context-free grammar (SCFG), and a hand-written Bunsetsu grammar. To compare the predictive power of these SLMs, the evaluation was conducted from two points of view: (1) the relationship between the number of model parameters and entropy, and (2) the predictive rate of the succeeding part of speech (POS) and the succeeding word. We propose a new type of bigram-HMM and compare it with the other models. Two kinds of approximations are tried and examined through experiments. Results based on both the English Brown Corpus and the Japanese ATR dialog database showed that the extended bigram-HMM performed better than the others and was more suitable as a language model.
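The entropy comparison in (1) can be made concrete with a toy sketch (ours, not the paper's models or corpora): perplexity is 2^H, where H is the average negative log2 probability a model assigns to held-out text. The add-alpha smoothing below is an illustrative stand-in for whatever estimation a real SLM would use.

```python
import math
from collections import Counter

def bigram_perplexity(train, test, alpha=0.1):
    """Perplexity of an add-alpha-smoothed bigram model (toy sketch).

    alpha-smoothing is a placeholder for a real estimation scheme; it
    just keeps every bigram probability nonzero so H stays finite.
    """
    vocab = set(train) | set(test)
    v = len(vocab)
    uni = Counter(train)
    bi = Counter(zip(train, train[1:]))
    log_sum = 0.0
    for u, w in zip(test, test[1:]):
        p = (bi[(u, w)] + alpha) / (uni[u] + alpha * v)
        log_sum += -math.log2(p)
    h = log_sum / (len(test) - 1)  # average bits per predicted token
    return 2 ** h
```

A model that predicts the held-out sequence well has perplexity close to 1; a model no better than uniform guessing has perplexity near the vocabulary size.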

  • An Abstraction of Fixpoint Semantics for Normal Logic Programs

    Susumu YAMASAKI  

     
    PAPER-Software Theory
    Vol: E79-D No:3  Page(s): 196-208

    We deal with a fixpoint semantics for normal logic programs by means of an algebraic manipulation of idempotent substitution sets. Because of negation, the function associated with a given normal logic program, which captures the deductions caused by the program, is in general nonmonotonic as long as we take a 2-valued logic approach. The drawback of a nonmonotonic function is that the existence of its fixpoint is not guaranteed, although the fixpoint is regarded as representing the whole behaviour of the program. The stable model of [6] is a fixpoint of a nonmonotonic function, but it is referred to only on the assumption that it exists. On the other hand, if we take a 3-valued logic approach for normal logic programs, as in [5], [9], [11], [14], we have a monotonic function that represents resolutions and negation as failure, and its fixpoint is well defined, although the fixpoint may not be constructive because of discontinuity. Since the substitutions for variables in the program are essentially significant in the deductions of logic programming, we focus on representations of the deductions by means of substitutions, without the usual expressions based on atomic formulas. We examine the semantics in terms of abstract interpretations among the semantics surveyed in [9], where an abstraction stands for the capability of representing another semantics. In this paper, taking the 3-valued logic approach and using the substitution manipulation, the semantics is defined to be an abstraction of the semantics in [5], [9]. To construct a semantics based on the idempotent substitution set, the algebraic manipulation of substitutions is essential, whereas the treatment in [10] for the case of definite clause sets is not applicable because it restricts substitutions, regarded as most general unifiers, to some variable domain.

  • Recursive Construction of the Systems Interpolating 1st- and 2nd-Order Information

    Kazumi HORIGUCHI  

     
    LETTER-Systems and Control
    Vol: E79-A No:1  Page(s): 134-137

    We present a recursive algorithm for constructing linear discrete-time systems that interpolate the desired 1st- and 2nd-order information. Every time new information is added, the recursive algorithm constructs a new system and connects it to the previous system in cascade form. These procedures yield a practical realization of all the interpolants.

  • On the One-Way Algebraic Homomorphism

    Eikoh CHIDA  Takao NISHIZEKI  Motoji OHMORI  Hiroki SHIZUYA  

     
    PAPER
    Vol: E79-A No:1  Page(s): 54-60

    In this paper we discuss the relation between a one-way group homomorphism and a one-way ring homomorphism. Let U, V be finite abelian groups with #U = n. We show that if there exists a one-way group homomorphism f: U → V, then there exists a one-way ring homomorphism F: Zn⊕U → Zn⊕Im f. We also give examples of such ring homomorphisms that are one-way under a standard cryptographic assumption. This implies an affirmative solution to an extended version of the open question raised by Feigenbaum and Merritt: Is there an encryption function f such that both f(x+y) and f(xy) can be efficiently computed from f(x) and f(y)? A multiple signature scheme is also given as an application of one-way ring homomorphisms.
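As a toy illustration (with insecurely small parameters, and not the paper's construction) of the group-homomorphism property underlying this line of work: modular exponentiation f(x) = g^x mod p is a homomorphism from addition to multiplication, and inverting it is the discrete-logarithm problem, which is why such maps are candidate one-way homomorphisms.

```python
# Toy group homomorphism f: (Z_{p-1}, +) -> (Z_p^*, *), f(x) = g^x mod p.
# The parameters are deliberately tiny and insecure; they only illustrate
# the homomorphic property f(x + y) = f(x) * f(y).
p = 1009  # small prime, illustration only
g = 11    # base chosen for the example; an assumption, not from the paper

def f(x):
    return pow(g, x, p)

x, y = 123, 456
# Homomorphic property: adding exponents multiplies the images.
assert f((x + y) % (p - 1)) == (f(x) * f(y)) % p
```

A one-way *ring* homomorphism must additionally respect a second operation (multiplication), which is exactly the extra demand in the Feigenbaum-Merritt question quoted above.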

  • SAR Distributions in a Human Model Exposed to Electromagnetic Near Field by a Short Electric Dipole

    So-ichi WATANABE  Masao TAKI  

     
    PAPER-Electromagnetic Compatibility
    Vol: E79-B No:1  Page(s): 77-84

    The SAR distributions over a homogeneous human model exposed to the near field of a short electric dipole in the resonant frequency region were calculated with a spatial resolution of 1 cm³, which approximates 1 g of tissue, by using the FDTD method with the expansion technique. The dependence of the SAR distribution on the distance between the model and the source and on frequency was investigated. It was shown that a large local SAR appeared in the parts of the body nearest to the source when the source was located 20 cm from the body, whereas the local SAR was largest in narrow sections such as the neck and legs when the source was farther than 80 cm from the model. It was also shown that, for near-field exposure in the resonant frequency region, the profile of the layer-averaged SAR distribution along the main axis of the body depended little on frequency, while the SAR distribution in the section perpendicular to the main axis depended on frequency. The maximum local SAR per gram of tissue over the whole body model was also determined, showing that the ratios of the maximum local SAR to the whole-body averaged SAR for near-field exposure were at most several times as large as the corresponding ratio for far-field exposure when the small source was located farther than 20 cm from the surface of the human model.

  • Data Reduction Method for the Laser Long-Path Absorption Measurement of Atmospheric Trace Species Using the Retroreflector in Space

    Nobuo SUGIMOTO  Atsushi MINATO  

     
    PAPER
    Vol: E78-B No:12  Page(s): 1585-1590

    A data reduction method for earth-satellite-earth laser long-path absorption measurements of atmospheric trace species using the Retroreflector in Space (RIS) on the Advanced Earth Observing Satellite (ADEOS) is described. In the RIS experiment, atmospheric absorption will be measured with single-longitudinal-mode pulsed CO2 lasers and their second and third harmonics. High-resolution absorption spectra are measured by using the Doppler shift of the return beam caused by the satellite's movement. Vertical profiles of O3 and CH4 are retrieved from the measured absorption line shapes with the inversion method. In addition, column contents of CFC12, HNO3, CO2, CO, and N2O are derived by the least-squares method with assumptions on the relative vertical profiles. Errors in the measurement were evaluated by computer simulation.

  • Vision System for Depalletizing Robot Using Genetic Labeling

    Manabu HASHIMOTO  Kazuhiko SUMI  Shin'ichi KURODA  

     
    PAPER
    Vol: E78-D No:12  Page(s): 1552-1558

    In this paper, we present a vision system for a depalletizing robot that recognizes carton objects. The algorithm consists of the extraction of object candidates and a labeling process to determine whether or not they actually exist. Considering this labeling a combinatorial optimization of labels, we propose a new labeling method applying a Genetic Algorithm (GA). The GA is an effective optimization method, but it has been inapplicable to real industrial systems because of its processing time and the difficulty of finding the global optimum solution. We have solved these problems by using the following guidelines for designing the GA: (1) encoding high-level information, such as the existence of object candidates, into chromosomes; (2) proposing an effective coding method and genetic operations based on the building-block hypothesis; and (3) preparing a support procedure in the vision system to compensate for the mis-recognition caused by pseudo-optimum solutions in labeling. Here, the hypothesis says that a better solution can be generated by combining parts of good solutions. In our problem, it is expected that a globally desirable image interpretation can be obtained by combining subimages interpreted consistently. Through experiments on real images, we have shown that the reliability of the proposed vision system is more than 98% and the recognition speed is 5 seconds/image, which is practical enough for the real-time robot task.
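The chromosome encoding in guideline (1) can be sketched schematically (our toy, not the authors' system): each bit asserts the existence of one object candidate, and the fitness function, which here is a simple stand-in, would score how consistent the resulting interpretation is. Single-point crossover recombines good partial interpretations, echoing the building-block hypothesis.

```python
import random

def genetic_labeling(fitness, n_bits, pop_size=20, generations=60,
                     p_mut=0.05, seed=0):
    """Minimal GA sketch for a binary labeling problem.

    Each chromosome is a bit-vector whose i-th bit says whether object
    candidate i is assumed to exist.  The fitness function is a stand-in
    for a consistency score over the image interpretation.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # select among the fittest
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]           # single-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Stand-in fitness: count of asserted candidates (a real system would
# score label consistency instead).
best = genetic_labeling(fitness=sum, n_bits=16)
```

The support procedure of guideline (3) would sit outside this loop, catching the cases where the GA converges to a pseudo-optimum labeling.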

  • Extraction of Three-Dimensional Multiple Skeletons and Digital Medial Skeleton

    Masato MASUYA  Junta DOI  

     
    PAPER
    Vol: E78-D No:12  Page(s): 1567-1572

    We consider that multiple skeletons are inherent in an ordinary three-dimensional object. A thinning method is developed to extract multiple skeletons, using 3×3×3 templates for boundary deletion based on the hit-or-miss transformation and 2×2×2 templates for checking one-voxel thickness. We prepared twelve sets of deleting templates, consisting of 194 templates in total, and 72 one-voxel checking templates. One repetitive iteration, using one sequential application of the template sets, extracts one skeleton. Some of the skeletons thus obtained are identical; nevertheless, multiple independent skeletons are extracted by this method. These skeletons fulfill the three well-recognized conditions for a skeleton. We extracted three skeletons from the cube, two from the space shuttle model, and four from the L-shaped figure of Tsao and Fu. The digital medial skeleton, which is not otherwise extracted, is obtained by comparing the multiple skeletons with the digital medial-axis-like figure. One of our skeletons for the cube agreed with the ideal medial axis. The locations of the centers of gravity of the multiple skeletons are compared with that of the original shape to evaluate how uniform or unbiased the extracted skeletons are. For the L-shaped figure, one of our skeletons is found to be the most desirable from the medial and uniformity points of view.

  • Trial for Deep Submicron Track Width Recording

    Hiroaki MURAOKA  Yoshihisa NAKAMURA  

     
    PAPER
    Vol: E78-C No:11  Page(s): 1517-1522

    An extremely narrow track width in the deep-submicron range is examined for perpendicular magnetic recording. The head field distribution of a single-pole head, analyzed by three-dimensional computer simulation, shows a sharp gradient, but a relatively large cross-sectional area is required to maintain the head field strength. Based on this design concept, a lateral single-pole head is described and shown to attain a track width of 0.4 µm. In addition, a multilevel partial response scheme appropriate to the new multitrack recording system is proposed.

  • Extremely High-Density Magnetic Information Storage--Outlook Based on Analyses of Magnetic Recording Mechanisms--

    Yoshihisa NAKAMURA  

     
    INVITED PAPER
    Vol: E78-C No:11  Page(s): 1477-1492

    Tremendous progress has been made in magnetic data storage by applying theoretical considerations to technologies accumulated empirically through a great deal of research and development. In Japan, the recording demagnetization phenomenon was intensively analyzed by many researchers because it was a serious problem in analogue signal recording, such as video tape recording, which uses a relatively thick magnetic recording medium. Consequently, perpendicular magnetic recording was proposed as a method for extremely high-bit-density recording. This paper describes the theoretical background that led to the idea of perpendicular magnetic recording. Furthermore, the potential of magnetic recording is discussed on the basis of results obtained theoretically with magnetic recording simulators. Magnetic storage has the potential for extremely high bit densities exceeding 1 Tb/cm². We propose the idea of 'spinic data storage', in which binary digital data could be stored in each ferromagnetic single-domain columnar particle when the perpendicular magnetizing method is used.

  • Control of Soft Magnetism of Co-Zr and Co-Zr-Ta Films for Backlayers in Perpendicular Magnetic Recording Media

    Shigeki NAKAGAWA  Masahiko NAOE  

     
    PAPER
    Vol: E78-C No:11  Page(s): 1557-1561

    Co-Zr and Co-Zr-Ta amorphous films were prepared by the Kr sputtering method for use as the backlayers of Co-Cr perpendicular magnetic recording tape media. The effect of the addition of Ta to Co-Zr thin films was also investigated. A lower substrate temperature was required to prepare amorphous Co-Zr films with excellent soft magnetic properties. The relationships among the Ta content X, the magnetostriction constant λ, and magnetic characteristics such as the coercivity Hc and the relative permeability µr were clarified. A method of evaluating λ for soft magnetic thin films deposited on polymer sheet substrates is presented. Films with the composition (Co95.7Zr4.3)100-XTaX at X = 10 at.% possessed sufficiently soft magnetic properties, such as a low Hc below 80 A/m and a high µr above 600. The addition of Ta was effective in changing the sign of λ from positive to negative. It was found that the negative magnetoelastic energy and the smaller λ gave rise to the soft magnetism.

  • Effects of In-Plane Hard Magnetic Layer on Demagnetization and Media Noise in Triple-Layered Perpendicular Recording Media

    Toshio ANDO  Makoto MIZUKAMI  Toshikazu NISHIHARA  

     
    PAPER
    Vol: E78-C No:11  Page(s): 1543-1549

    The authors have studied the demagnetization phenomenon observed in a conventional CoCrTa/CoZrNb double-layered (DL) perpendicular recording medium, and have also investigated the effects of an in-plane hard magnetic layer in a triple-layered (TL) perpendicular recording medium. The in-plane hard magnetic underlayer is made of CoSm or CoCrTa/Cr and is laid under the CoZrNb soft magnetic layer. In the DL medium, a demagnetization phenomenon, i.e., a decrease of the readback signal, is observed when the CoCrTa layer has a strong perpendicular orientation and the CoZrNb underlayer has a low coercivity. The amount of the signal decrease depends strongly on the accumulated disk revolutions. This demagnetization is considered to be caused by the fact that the recorded magnetization in the CoCrTa layer is reduced by the magnetic field generated from the domain walls in the CoZrNb layer, since the CoZrNb layer is very sensitive to the magnetic environment, such as geomagnetism, and the domain walls move as the disk rotates. In the TL medium, on the other hand, the hard magnetic layer has the effect of pinning the magnetic domains in the CoZrNb layer, by which the demagnetization problem is successfully prevented. The hard magnetic layer remarkably reduces the domain walls in the CoZrNb layer and contributes to medium noise reduction. Thus the TL medium presents a higher SN ratio than the DL medium.

  • Point Magnetic Recording Using a Force Microscope Tip on Co-Cr Perpendicular Media with Compositionally Separated Microstructures

    Toshifumi OHKUBO  Yasushi MAEDA  Yasuhiro KOSHIMOTO  

     
    PAPER
    Vol: E78-C No:11  Page(s): 1523-1529

    A soft magnetic force microscope (MFM) tip was used to evaluate the magnetic recording characteristics of compositionally separated Co-Cr perpendicular media. Small magnetic bits were recorded on thick (350 nm) and thin (100 nm) films, focusing on the fineness of the compositionally separated microstructures. MFM images showed bit marks 230 and 150 nm in diameter, measured at full width at half maximum (FWHM), for the thick and thin films, respectively. These results verify that the recordable bit size can be decreased by using a thinner film with a finer compositionally separated microstructure. Simulation was used to clarify the relationship between the actual sizes of the recorded bits and the sizes of their MFM images. The recorded bit size was found to correspond closely to the FWHM of the MFM bit images.

  • Microwave Power Absorption in a Cylindrical Model of Man in the Presence of a Flat Reflector

    Shuzo KUWANO  Kinchi KOKUBUN  

     
    LETTER-Electromagnetic Compatibility
    Vol: E78-B No:11  Page(s): 1548-1550

    This letter describes the power absorption of a cylindrical man model placed near a flat reflector and exposed to a TE microwave. The numerical results show that the absorption is in some cases an order of magnitude or more greater than that of the man model without a reflector.

  • Synthesizing Efficient VLSI Array Processors from Iterative Algorithms by Excluding Pseudo-Dependences

    Yeong-Sheng CHEN  Sheng-De WANG  Kuo-Chun SU  

     
    PAPER-Digital Signal Processing
    Vol: E78-A No:10  Page(s): 1369-1380

    This paper is concerned with synthesizing VLSI array processors from iterative algorithms. Our primary objective is to obtain the highest processor efficiency rather than the shortest completion time. Unlike most of the previous work, which assumes the index space of the given iterative algorithm to be boundless, the proposed method takes into account the effects of the boundaries of the index space. Owing to this consideration, pseudo-dependence relations are excluded, and most of the independent computations can therefore be grouped uniformly. With the method described in this paper, the index space is partitioned into equal-size blocks, and the corresponding computations are systematically and uniformly mapped onto processing elements. The synthesized VLSI array processors possess the attractive feature of very high processor efficiency, which is, in general, superior to what is derived from conventional linear transformation methods.
