
IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.40

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1


Volume E74-A No.9  (Publication Date:1991/09/25)

    Special Issue on Information Theory and Its Applications
  • FOREWORD

    Hiroshi SATO  

     
    FOREWORD

      Page(s):
    2455-2455
  • Information Theory in Cryptology

    Hirosuke YAMAMOTO  

     
    INVITED PAPER

      Page(s):
    2456-2464

    Recent information theoretical topics in cryptology are surveyed. Coding theorems are reviewed for Shannon's cipher system, Simmons' theory of authenticity, the wiretap channel, the secret sharing communication system, etc.

  • A Converse on the Error Exponent for Binary Channels: An Approach Based on the Intersection of Spheres

    Te Sun HAN  Kingo KOBAYASHI  

     
    INVITED PAPER

      Page(s):
    2465-2472

    One of the basic problems in information theory, namely the determination of the reliability function of the binary symmetric channel, is studied by establishing the exponent of the cardinality of the intersection of two Hamming spheres.

  • Recent Applications of Error-Correcting Codes

    Hideki IMAI  

     
    INVITED PAPER

      Page(s):
    2473-2482

    The recent spread of applications of error-correcting codes is truly remarkable. This is mainly due to the development of LSI encoders and decoders (codecs) for error-correcting codes. This paper picks up some LSI codecs developed recently for Reed-Solomon codes, double-error-correcting Bose-Chaudhuri-Hocquenghem codes, difference-set cyclic codes and convolutional codes with Viterbi decoding, and discusses the related topics in applications of error-correcting codes. We also discuss remaining problems of coding theory which are important from the aspect of practical applications.

  • Dynamic Construction and Quick Search Algorithms of Binary Balanced Trees with a Fidelity Criterion

    Hisashi SUZUKI  Suguru ARIMOTO  

     
    PAPER

      Page(s):
    2483-2494

    This paper proposes (i) an algorithm that dynamically constructs an asymptotically balanced binary tree to store successively given keys without knowledge of the distribution of key occurrence, and (ii) another algorithm for quick key search over the constructed tree such that, if the tree holds in memory at least one key inside the Δ-neighborhood (in a Hamming space) of a reference key, then the algorithm finds one of such Δ-neighbor keys almost with probability 1. The memory capacity required to describe a tree in the tree construction algorithm is of an order proportional to the number l of keys already processed. For an independently and identically distributed binary source generating keys, the mean computation time required to update the tree for every key input can be of an order slightly higher than (log2 l)^2 log2(log2 l), and that required to search for a Δ-neighbor key can be of an order slightly higher than (log2 l)^3.
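    The Δ-neighbor condition the abstract describes can be illustrated with a minimal sketch (not the authors' tree algorithm — a plain linear scan stands in for the balanced-tree search, and all key values are hypothetical):

    ```python
    def hamming_distance(a: int, b: int) -> int:
        """Number of bit positions in which two binary keys differ."""
        return bin(a ^ b).count("1")

    def find_delta_neighbor(stored_keys, reference, delta):
        """Linear-scan stand-in for the paper's tree search: return any
        stored key within Hamming distance delta of the reference key,
        or None if no such key is stored."""
        for key in stored_keys:
            if hamming_distance(key, reference) <= delta:
                return key
        return None
    ```

    The paper's contribution is achieving this lookup in roughly polylogarithmic time over a dynamically balanced tree, rather than the linear scan shown here.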

  • An Adaptive Noiseless Coding for Sources with Big Alphabet Size

    Shuichi ITOH  Nobuyasu GOTO  

     
    PAPER

      Page(s):
    2495-2503

    When the alphabet of a source is very big, existing universal data compression algorithms fail to show their expected performance. The purpose of this paper is to provide an efficient coding algorithm even for sources with such a big alphabet size. We assume that the source has an independent and identical smooth distribution which is itself unknown to us, and that the source symbols have a linear order. The algorithm is described in an adaptive enumerative coding flavor. Let Sn denote the set of the observed past n samples, and suppose we are to encode a new sample x noiselessly. We look into Sn and decide the positional index of x among the ordered past n samples; this can be done by Cover's enumerative formula. Sn is divided into k groups of equal size according to that order. Since the probability that x falls into any particular group is then almost equal to 1/k, the group number of x is encoded in fixed length, log k bits. The source alphabet is also partitioned into k sub-alphabets in accordance with the groups in Sn. The positional index of x in the selected sub-alphabet is then encoded in variable length. The computational time complexity of the algorithm is linear in the input data length. Experimental analysis and discussion show the superiority of our algorithm over existing ones for the sources in question. A modification to the base algorithm is also proposed for implementation purposes.
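    The fixed-length stage described above can be sketched as follows (a simplified illustration, not the authors' full codec; the rank-to-group mapping and sample values are assumptions for the example):

    ```python
    import bisect
    import math

    def group_index(past_samples, x, k):
        """Rank the new sample x among the sorted past samples and map
        that rank to one of k (nearly) equal-size groups, which can be
        encoded in a fixed ceil(log2 k) bits as the abstract describes."""
        ordered = sorted(past_samples)
        rank = bisect.bisect_left(ordered, x)    # positional index of x
        n = len(ordered)
        group = min(rank * k // (n + 1), k - 1)  # which of the k groups
        bits = math.ceil(math.log2(k))           # fixed-length code size
        return group, bits
    ```

    The variable-length stage (encoding x's index within the selected sub-alphabet) would follow, and is where the enumerative formula does the real work.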

  • New Binary Encoding Schemes of Positive Integers

    Yoko MIYAKAWA  Yuichi SAITOH  Hideki IMAI  

     
    PAPER

      Page(s):
    2504-2512

    We propose five classes of variable-length codes for encoding positive integers. One is a class of codes having constant Hamming weight. The other four classes have a prefix and a suffix in each codeword: the prefix indicates the weight or the length of the suffix, and variable-length constant-weight codes are used for the prefixes and the suffixes of some of these codes. For encoding these codes we can use Schalkwijk's algorithm. It is shown that some of the proposed codes are universal and asymptotically optimal in the sense defined by P. Elias. We compare them with other known codes from the viewpoints of the length of the codewords corresponding to the integers 2^M (M = 0, 1, …, 30) and the average codeword length for two sources: the 26 letters in English sentences and the geometrically distributed positive integers. The results show that some of the proposed codes which are universal and asymptotically optimal are more efficient than the known codes.
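    As a point of reference for the prefix-plus-suffix construction, the classical Elias gamma code (a universal code of the kind the paper compares against; the proposed constant-weight codes themselves are not reproduced here) can be sketched in a few lines:

    ```python
    def elias_gamma(n: int) -> str:
        """Elias gamma code for a positive integer n: a unary run of
        zeros giving the bit length of n minus one, followed by the
        binary representation of n itself."""
        assert n >= 1, "gamma code is defined for positive integers"
        binary = bin(n)[2:]                     # binary digits of n
        return "0" * (len(binary) - 1) + binary
    ```

    For example, 1 → "1", 2 → "010", 5 → "00101"; the unary prefix lets a decoder know how many bits to read, giving the prefix-free property.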

  • Multi-Dimensional Lossy Coding via Copying with Its Practical Application to Interframe Low-Rate Video Compression

    Takahiro SAITO  Ryuji ABE  Takashi KOMATSU  Hiroshi HARASHIMA  

     
    PAPER

      Page(s):
    2513-2522

    We have recently extended one of the concepts of lossless universal pattern-matching coding, namely the concept of coding via copying, to multi-dimensional lossy coding, and applied the extended concept to intraframe compression of still images. The work herein applies the extended concept of lossy coding via copying to interframe low-rate video compression, thus developing a novel low-rate interframe PMIC (pattern-matching image coding) technique, which has the effect of generalizing the definition of the search area used in existing block-matching motion compensation. We have experimentally shown the performance gain provided by this generalization within the framework of lossy coding via copying, and demonstrated that the interframe PMIC technique is useful and promising as a basic means for low-rate video compression.

  • Optimal Decision Tree Design Based on Information Theoretical Cost Bound

    Satoru OHTA  Fumio KANAYA  

     
    PAPER

      Page(s):
    2523-2530

    Decision tree design is an important issue because decision trees have many applications in, for example, fault diagnosis and character recognition. This paper describes an algorithm for designing a probabilistic decision tree, in which a test applied to a state produces different outcomes with non-zero probability. The algorithm strictly minimizes the tree cost, which is defined as the weighted sum of the expected loss and the expected test execution time. A lower bound on the cost is utilized to remove the excessive computing time needed to search for the exact optimum. This lower bound is derived from information theory and is transformed into an easily computable representation by introducing a network model. The results of a computational experiment confirm the time efficiency of the proposed method.

  • Higher Order Spectral Density Nulls and Spectral Lines

    Hiroshi KAMABE  

     
    PAPER

      Page(s):
    2531-2539

    Codes having order-K spectral density nulls, i.e., codes such that the power spectral density of the encoded sequences and its first 2K-1 derivatives vanish at rational submultiples f of the symbol frequency, are investigated. Several necessary and sufficient conditions for a code to have an order-K spectral density null at f are given. A lower bound on the minimum Euclidean distance of a code with an integer alphabet and an order-K spectral density null is also derived. Canonical diagrams for higher order spectral density nulls and nonzero spectral lines are introduced.

  • Innovation Models for Stochastic Process and Their Zeros

    Kuniharu KISHIDA  

     
    PAPER

      Page(s):
    2540-2546

    The relationship between two innovation representations of a stationary process is discussed, and a numerical algorithm for a new type of innovation model is introduced for time series analysis. Coefficient matrices of both innovation models are derived from correlation functions by using the singular value decomposition of a Hankel matrix and solving a Riccati-type equation. Zeros of both models are also examined, since they play an important role not only in the analysis of the AR model but also in system diagnosis.

  • An Evaluation Method for Energy Stochastic System with Nonstationary Random Input and a Prediction of Response Probability Distribution for the Acoustic Insulation Wall Excited by a Road Traffic Noise

    Mitsuo OHTA  Kiminobu NISHIMURA  Kazutatsu HATAKEYAMA  

     
    PAPER

      Page(s):
    2547-2554

    A new trial of statistical evaluation of the output response of a stochastic linear system on an energy scale is proposed as a method of predicting its probability distribution function by using a differential operator with respect to a distribution parameter. Because the differential operator is independent of the system variable, the statistical features of the input signal, such as nonstationarity, non-gamma properties and various types of linear and non-linear correlations, reflected in the differential operator, and the system dynamics can be treated independently of the system variable. Hence, the probability distribution of the transmitted power is easily derived by applying the nonstationary operator to the basic probability distribution of the output response obtained by a pre-experiment with a test signal of standard gamma type. As an application to an actual noise environment, the proposed method is employed for the evaluation of a sound insulation system of double walls with a sound bridge excited by non-stationary road traffic noise.

  • Error Performance of Multi Stage Hard-Decision Bounded Distance Decoding for Multi-Level Block Modulation Codes

    Mitsuteru KATAOKA  Toyoo TAKATA  Tadao KASAMI  Satoshi UJITA  

     
    PAPER

      Page(s):
    2555-2562

    In this paper, we analyze a multi-stage hard-decision bounded-distance decoding method for multi-level block modulation codes. Each stage of the decoding is performed by a combination of a hard-decision for each received symbol and an algebraic decoding for a component code. Error performance of multi-level block modulation codes is analyzed for a memoryless additive channel based on the multi-stage decoding algorithm presented here. Formulas are derived for the probability of erroneous decoding and that of decoding failure. Then these probabilities are computed for some specific 8-PSK block modulation codes.

  • Block Decodable Unit Memory Codes

    Takeshi HASHIMOTO  

     
    PAPER

      Page(s):
    2563-2570

    UM codes, which are block-oriented convolutional codes with branches as long as a block code, are known to possess large minimum distances compared to ordinary convolutional codes, but optimal decoding of UM codes is generally quite difficult since the number of decoder states is an exponential function of the branch length. In this paper, a class of UM codes which are partially decodable with block decoders, termed PBD-UM codes, is investigated and shown to be effective. Such a code, for example, can be constructed from a BCH code and decoded by a decoder consisting of a number of BCH decoders. Since the BCH code is easy to decode, a PBD-UM code constructed in this manner is also relatively easy to decode. As a specific example, a decoding scheme based on Tanaka's step-by-step soft-decision decoding algorithm is proposed and investigated through simulation.

  • New Decoding Techniques for Punctured Convolutional Codes

    Pisit CHARN KEIT KONG  Yuichi SAITOH  Kazuhiko YAMAGUCHI  Hideki IMAI  

     
    PAPER

      Page(s):
    2571-2578

    In this paper, a rate k/n punctured convolutional code is considered as a time-varying code with period k. The punctured convolutional codes with time-varying constraint length are studied. For these codes, new maximum likelihood decoding techniques are proposed. Not only for the binary symmetric channel but also for any discrete memoryless channel, these decoding techniques can periodically reduce the number of survivors and comparisons in the decoding process. On the basis of the proposed decoding techniques, new classes of punctured convolutional codes are introduced. Computer searches are performed to construct good codes from these classes.

  • An Arrangement Technique of Gray-Code Table for Signal Constellation of Modified QAM and Triangular-Shaped Signal Set

    Takaya YAMAZATO  Shinjiro OSHITA  Iwao SASASE  Shinsaku MORI  

     
    PAPER

      Page(s):
    2579-2585

    An arrangement technique of a Gray-code table for the signal constellations of modified QAM and a triangular-shaped signal set is proposed. This is an optimum mapping technique for both two-dimensional rectangular and hexagonal lattice-based constellations, to which the conventional Gray-code mapping cannot be applied. As our technique ensures that most neighboring symbols differ by one bit, the bit-error probability (BEP) of the system becomes nearly equal to the symbol-error probability (SEP). Furthermore, using this technique, we can obtain a constellation for the rectangular lattice-based case in which all neighboring symbols differ by only one bit, but its error performance is worse than that of the original constellations because of a decrease in the corresponding decision regions.
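    The property the mapping technique above aims for — neighboring symbols differing in a single bit — is exactly what the classical binary-reflected Gray code provides on a one-dimensional index set. A minimal sketch (the baseline construction, not the paper's two-dimensional arrangement):

    ```python
    def gray(i: int) -> int:
        """Binary-reflected Gray code: consecutive indices map to
        codewords that differ in exactly one bit, so a nearest-neighbor
        symbol error corrupts only a single data bit."""
        return i ^ (i >> 1)
    ```

    The paper's contribution is approximating this one-bit-difference property on rectangular and hexagonal lattice constellations, where a strict Gray labeling of all neighbor pairs is not generally possible.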

  • Properties of Photon Distribution of Squeezed State in Loss Channel

    Seitetsu BUN  Kazushige YONENAGA  Kouichi YAMAZAKI  

     
    PAPER

      Page(s):
    2586-2595

    In this paper, we analyze the photon distribution of the squeezed state from the viewpoint of quantum state control communications (QSCC). In particular, the effect of channel loss on its properties is shown for transmitted quantum state control (TQSC). As a result, we show the general property of the photon distribution of a squeezed state suffering the loss effect in the channel with respect to the transmission efficiency. Although the photon distribution of a squeezed state suffering the loss effect approaches the Poissonian one, the intermediate photon distribution becomes remarkable in some regions of the squeezing parameter. We then show the optimum conditions of the squeezed state based on the signal-to-noise ratio (SNR) in the photon counting system. It is found that the degree of squeezing that minimizes the error probability is much smaller than that for the SNR. Moreover, we compare the properties of a photon counting system using a squeezer as a preamplifier with those using a general linear optical amplifier.

  • Crosscorrelation Cancellation in SS/DS Block Demodulator

    Akihiro KAJIWARA  Masao NAKAGAWA  

     
    PAPER

      Page(s):
    2596-2602

    This paper presents a direct-sequence spread-spectrum (SS/DS) block demodulator employing a crosscorrelation canceller. The crosscorrelation cancellation can be realized by adding a matrix calculator to a conventional SS/DS block demodulator. In the SS/DS block demodulator, the PN-codes assigned to all SS/DS users can be detected by recursively processing the received signal stored in memory. The correlation values between the PN-codes are then calculated, and the received signal is processed algebraically by the cancellation matrix composed of these correlation values. As a result, the demodulator can provide high capacity per unit bandwidth. Simulation results show that the capacity per unit bandwidth is around 45% when the PN-code length is 127 chips (process gain = 21 dB) and the input SNR is higher than -6 dB, and that good near-far resistance can be achieved.
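    The algebraic cancellation step can be illustrated with a two-user toy sketch (hypothetical correlation values, not the paper's demodulator): multiplying the matched-filter outputs by the inverse of the PN-code crosscorrelation matrix strips the cross-user interference.

    ```python
    def decorrelate(y, R):
        """Two-user crosscorrelation cancellation sketch: solve R*d = y
        for the data vector d by applying the inverse of the 2x2
        crosscorrelation matrix R to the matched-filter outputs y."""
        (a, b), (c, d) = R
        det = a * d - b * c                       # assumed non-singular R
        inv = [[d / det, -b / det],
               [-c / det, a / det]]
        return [inv[0][0] * y[0] + inv[0][1] * y[1],
                inv[1][0] * y[0] + inv[1][1] * y[1]]
    ```

    With R = [[1.0, 0.2], [0.2, 1.0]] and transmitted data [1, -1], the matched-filter outputs are [0.8, -0.8]; the cancellation recovers the original [1, -1] regardless of the crosscorrelation, which is the source of the demodulator's near-far resistance.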

  • Regular Section
  • A CMOS Cell Compiler for a Bit-Mapping CAD System

    Cong-Kha PHAM  Katsufusa SHONO  

     
    PAPER-Computer Aided Design (CAD)

      Page(s):
    2603-2611

    In this paper, the CMOS Cell Compiler (CCC), which was developed on our bit-mapping CAD system, is introduced. The CCC generates the physical layout for a logic cell from a functional description expressed by a set of Boolean equations. A CMOS digital LSI having up to 10^4 transistors can be designed on this bit-mapping CAD system by placing the physical layouts of the generated cells on the bit-map editor and manually routing the interconnections among the cells. The physical layout obeys a λ-based design rule on a bit-map grid plane and can support single- and double-metal CMOS processes. The aspect ratio and the positions of the input and output terminals in a rectangular cell depend on the functional description of the Boolean equations.

  • Efficient Detection and Signal Parameter Estimation with Application to High Dynamic GPS Receivers

    Rajendra KUMAR  

     
    PAPER-Digital Signal Processing

      Page(s):
    2612-2624

    This paper presents a novel technique for simultaneously detecting data and estimating the parameters of a received carrier signal that is phase modulated by unknown data and experiencing very high Doppler, Doppler rate, etc. Such a situation arises, for example, in the case of the Global Positioning System (GPS), where the signal parameters are directly related to the position, velocity and acceleration of the GPS receiver. The proposed scheme is based upon first estimating the local (data dependent) parameters of the received signal over two consecutive bit periods, followed by the detection of a possible jump in these parameters. The presence of a detected jump signifies a data transition, which is then removed from the received signal. This effectively demodulated signal is then processed to provide the estimates of the global (data independent) parameters of the signal related to the position, velocity, etc. of the receiver. One of the key features of the proposed algorithm is the introduction of two different and equivalent schemes which can provide an improvement of up to 3 dB over the conventional implementation of the Kalman filter as applied to phase and frequency estimation, under low to medium signal-to-noise ratio conditions. One scheme is based upon reprocessing (cycling) the measurements over an optimally selected interval, while the alternative scheme proposes an adaptive Hilbert transform technique. The overall complexity of the proposed algorithm is about three times that of a single third-order Kalman filter and is thus comparable to that of a scheme based on the Fast Fourier Transform technique proposed earlier for this problem. However, the proposed scheme provides an improvement of about 6 dB in terms of the carrier power to noise power spectral density ratio (CNR) over the earlier scheme in the literature. The proposed scheme also provides the carrier phase estimate, unlike the previous scheme.

  • A Dual-Probe Blood Flow-Mapping System Using Doppler Ultrasound

    Kunihiro CHIHARA  Kimisuke SHIRAE  

     
    PAPER-Ergonomics

      Page(s):
    2625-2633

    A major problem in Doppler ultrasound has been the lack of quantitation of the magnitude of the blood velocity vector, particularly in intracardiac flow, where the flow direction cannot be estimated. This paper describes a new flow-mapping system for visualizing the two-dimensional blood velocity vector. The resultant velocity vector is the projection of the three-dimensional velocity vector on the beam scanning plane.

  • Electrotropism by DC Electric Field in a Root of the Higher Plant

    Shu EZAKI  Kiyoshi TOKO  Kaoru YAMAFUJI  

     
    PAPER-Nonlinear Circuits and Systems

      Page(s):
    2634-2641

    Primary roots of the adzuki bean were incubated under an external electric field applied in the transverse direction across the root. A root was laid horizontally in an aqueous solution, and the electric field was generated by passing an electric current through the solution. Roots bent toward the positive electrode, and this bending tendency increased with increasing electric field. The surface electric potential and pH distributions were measured both under the electric field and under normal conditions. The application of the electric field caused asymmetries in the electric potential pattern and in the acidification. At the side activated in growth, corresponding to the electrically negative side of the root, the amplitudes of the potential and the acidification were found to increase, whereas the opposite tendency was observed at the opposite side. The results, together with a theoretical consideration, suggest that the symmetry of growth was broken by an asymmetrical H+ redistribution produced by the external electric force.

  • Task Assignment in Homogeneous Linear Array Networks

    Bog-Lae JO  Cheol-Hoon LEE  Dongmyun LEE  Myunghwan KIM  

     
    LETTER-Algorithms, Data Structures and Computational Complexity

      Page(s):
    2642-2648

    This letter deals with the problem of assigning tasks to the processors of a distributed computing system such that the sum of execution and communication costs is minimized. If the number of processors is two, this problem can be solved efficiently using the network flow approach pioneered by Stone. The problem is, however, known to be NP-complete in the general case, and thus intractable for systems with a large number of processors. Recently, an optimal algorithm with a time complexity of O(n^2 m^3 log n) was suggested for the problem of assigning m interacting tasks to a linear array of n processors, solving the problem by a two-terminal network flow approach. In this letter, we propose a multiterminal network flow approach for the task assignment problem in homogeneous linear array networks. The task assignment problem for a homogeneous linear array network of n processors is first transformed into (n-1) two-terminal network flow problems, and then solved in time no worse than O(nm^3) by applying Goldberg and Tarjan's network flow algorithm to each two-terminal network flow problem.

  • Cepstrum of Mixed Phase Signal Sequence Based on Moments

    Anil KHARE  Toshinori YOSHIKAWA  

     
    LETTER-Digital Signal Processing

      Page(s):
    2649-2651

    It is known that the moments of the cepstrum of a mixed phase signal sequence can be calculated without directly calculating the cepstrum, thus avoiding the phase unwrapping algorithm, aliasing errors, etc. Using this, the moments of the cepstrum of the minimum phase sequence are calculated from the linear phase sequence without actually calculating the minimum phase sequence.

  • A Method of Designing IIR Digital Filters with the Minimum Number of Interpolation Points

    Yoshiro SUHARA  

     
    LETTER-Digital Signal Processing

      Page(s):
    2652-2654

    This letter presents a method of optimal design of IIR digital filters with the minimum number of interpolation points in magnitude response. Those filters are classified into four types and each of them is designed by a simple formula. Maximal flatness, ripple property, filter order, and steepness in transition band are discussed.

  • Method for Measuring Glossiness of Spherical Surfaces Utilizing Brightness Pattern of Image

    Hideo KUGISAWA  Teizo AIDA  Masanao EBATA  

     
    LETTER-Human Communication

      Page(s):
    2655-2662

    A human generally judges gloss by the distinctness of the image projected on the specimen surface. If a CCD camera is used instead of the human eye, the distinctness of the image is closely related to the brightness pattern formed on the CCD camera. Therefore, the brightness pattern on the CCD camera was first obtained theoretically. Utilizing the calculated brightness pattern, we newly defined the brightness-pattern glossiness GBP, which is applicable to spherical specimens. The validity of GBP was then confirmed by experiments using enamel-painted balls and spherical pearls.