
Author Search Result

[Author] Hideki YAGI (22 hits)

Showing results 1-20 of 22

  • InP-Based Monolithic Integration Technologies for 100/200Gb/s Pluggable Coherent Transceivers Open Access

    Hideki YAGI  Yoshihiro YONEDA  Mitsuru EKAWA  Hajime SHOJI  

     
    INVITED PAPER

      Vol: E100-C No:2  Page(s): 179-186

    This paper reports dual-polarization in-phase and quadrature (DP-IQ) modulators and photodetectors integrated with a 90° hybrid, fabricated using InP-based monolithic integration technologies for 100/200Gb/s coherent transmission. The DP-IQ modulator monolithically integrates a Mach-Zehnder modulator array consisting of deep-ridge waveguides formed through dry etching and benzocyclobutene planarization processes. This DP-IQ modulator exhibited a low half-wavelength voltage (Vπ=1.5V) and a wide 3-dB bandwidth (f3dB > 28GHz). The photodetector monolithically integrated with the 90° hybrid, which consists of multimode interference structures, was realized by butt-joint regrowth. The responsivity, including a total waveguide loss of 7.9dB, was as high as 0.155A/W at a wavelength of 1550nm, and the responsivity imbalance between the in-phase and quadrature channels was less than ±0.5dB over the C-band. In addition, a low dark current (less than 500pA up to 85°C at -3.0V) and stable operation over 5,000h in an accelerated aging test (-5V at 175°C) were achieved for the p-i-n photodiode array with a buried heterostructure formed through selective embedding regrowth. Finally, by integrating a spot-size converter, a receiver responsivity higher than 0.070A/W at a wavelength of 1550nm was obtained, including the intrinsic 3-dB loss of the polarization beam splitter, and demodulation of 128Gb/s DP-QPSK and 224Gb/s DP-16QAM modulated signals was demonstrated for a compact coherent receiver using this photodetector integrated with the 90° hybrid. These results indicate that InP-based monolithically integrated photonic devices are highly useful for 100/200Gb/s pluggable coherent transceivers.

  • A Heuristic Search Method with the Reduced List of Test Error Patterns for Maximum Likelihood Decoding

    Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E88-A No:10  Page(s): 2721-2733

    Reliability-based heuristic search methods for maximum likelihood decoding (MLD) generate test error patterns (or, equivalently, candidate codewords) according to their heuristic values. Test error patterns are stored in lists, whose space complexity becomes prohibitively large for MLD of long block codes. Based on the decoding algorithm of Battail and Fang and on its generalized version suggested by Valembois and Fossorier, we propose a new method for reducing the space complexity of heuristic search methods for MLD, including the well-known decoding algorithm of Han et al. If the heuristic function satisfies a certain condition, the proposed method is guaranteed to reduce the space complexity of both the Battail-Fang and Han et al. decoding algorithms. Simulation results show the high efficiency of the proposed method.
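The best-first generation of test error patterns described above can be sketched with a priority queue. This is a generic illustration of reliability-based candidate generation, not the exact algorithm of Battail-Fang or Han et al.; the cost function (sum of the reliabilities of the flipped positions) is an assumed heuristic.

```python
import heapq

def test_error_patterns(reliability, max_flips):
    """Yield test error patterns (sets of positions to flip in the
    hard-decision word) in nondecreasing order of heuristic cost,
    where the cost of a pattern is the sum of the reliabilities of
    its flipped positions. Best-first search with a priority queue."""
    n = len(reliability)
    # Consider flips in order of increasing reliability (least reliable first).
    order = sorted(range(n), key=lambda i: reliability[i])
    # Heap entries: (cost, last index used in `order`, pattern as a tuple).
    heap = [(0.0, -1, ())]
    while heap:
        cost, last, pattern = heapq.heappop(heap)
        yield cost, set(pattern)
        if len(pattern) < max_flips:
            # Extend only with later positions so each pattern appears once.
            for j in range(last + 1, n):
                pos = order[j]
                heapq.heappush(heap, (cost + reliability[pos], j, pattern + (pos,)))
```

Because reliabilities are nonnegative, every extension of a pattern costs at least as much as the pattern itself, so the heap yields candidates in exactly the order a best-first MLD search would examine them.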

  • Fingerprinting Codes for Multimedia Data against Averaging Attack

    Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Application

      Vol: E92-A No:1  Page(s): 207-216

    Code construction for digital fingerprinting, a copyright protection technique for multimedia, is considered. Digital fingerprinting should deter collusion attacks, in which several fingerprinted copies of the same content are mixed to disturb their fingerprints. In this paper, we consider the averaging attack, which is known to be effective against multimedia fingerprinting with the spread spectrum technique. We propose new methods for constructing fingerprinting codes that increase the coding rate of conventional fingerprinting codes while guaranteeing identification of the same number of colluders. With the new fingerprinting codes, the system can accommodate a larger number of users when distributing digital content.
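The averaging attack the paper defends against can be illustrated directly: in spread-spectrum fingerprinting, averaging c colluders' copies attenuates each embedded mark by a factor of 1/c. A minimal sketch; the additive signal model and the example values are assumptions for illustration only.

```python
def averaging_attack(fingerprinted_copies):
    """Model of a collusion averaging attack: c colluders average their
    fingerprinted copies sample by sample. Each embedded fingerprint
    survives only at amplitude 1/c, which hampers detection."""
    c = len(fingerprinted_copies)
    n = len(fingerprinted_copies[0])
    return [sum(copy[i] for copy in fingerprinted_copies) / c for i in range(n)]

# Example: zero host signal, two colluders with antipodal spread-spectrum marks.
copy1 = [+1.0, -1.0, +1.0]   # host + fingerprint of user 1
copy2 = [-1.0, -1.0, -1.0]   # host + fingerprint of user 2
attacked = averaging_attack([copy1, copy2])  # marks cancel where they disagree
```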

  • InP-Based Photodetectors Monolithically Integrated with 90° Hybrid toward Over 400Gb/s Coherent Transmission Systems Open Access

    Hideki YAGI  Takuya OKIMOTO  Naoko INOUE  Koji EBIHARA  Kenji SAKURAI  Munetaka KUROKAWA  Satoru OKAMOTO  Kazuhiko HORINO  Tatsuya TAKEUCHI  Kouichiro YAMAZAKI  Yoshifumi NISHIMOTO  Yasuo YAMASAKI  Mitsuru EKAWA  Masaru TAKECHI  Yoshihiro YONEDA  

     
    INVITED PAPER

      Vol: E102-C No:4  Page(s): 347-356

    We present InP-based photodetectors monolithically integrated with a 90° hybrid toward over-400Gb/s coherent transmission systems. To attain a 3-dB bandwidth of more than 40GHz for 400Gb/s dual-polarization (DP)-16-ary quadrature amplitude modulation (16QAM) and 600Gb/s DP-64QAM through 64GBaud operation, a p-i-n photodiode structure consisting of a thin GaInAs absorption layer and a low-doping n-type InP buffer layer was introduced to overcome the trade-off between short carrier transit time and low parasitic capacitance. Additionally, this InP buffer layer reduces the propagation loss in the 90° hybrid waveguide; that is, this approach allows high responsivity as well as wide 3-dB bandwidth operation. Thanks to photodetectors with this photodiode design, the coherent receiver module for C-band (1530nm - 1570nm) operation exhibited a wide 3-dB bandwidth of more than 40GHz and a high receiver responsivity of more than 0.070A/W (chip responsivity within the C-band: 0.130A/W). To expand the usable wavelengths in wavelength-division multiplexing toward large-capacity optical transmission, a photodetector integrated with the 90° hybrid optimized for L-band (1565nm - 1612nm) operation was also fabricated, and exhibited a high responsivity of more than 0.120A/W over the L-band. Finally, an InP-based monolithically integrated photonic device consisting of eight-channel p-i-n photodiodes, two 90° hybrids, and a beam splitter was realized for module miniaturization, reducing the total module footprint by 70% compared to photodetectors with a 90° hybrid and four-channel p-i-n photodiodes.

  • A Method for Grouping Symbol Nodes of Group Shuffled BP Decoding Algorithm

    Yoshiyuki SATO  Gou HOSOYA  Hideki YAGI  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E91-A No:10  Page(s): 2745-2753

    In this paper, we propose a method for enhancing the performance of a sequential version of the belief-propagation (BP) decoding algorithm, the group shuffled BP decoding algorithm, for low-density parity-check (LDPC) codes. An improved BP decoding algorithm, called the shuffled BP decoding algorithm, updates each symbol node serially at each iteration. To reduce the decoding delay of the shuffled BP decoding algorithm, the group shuffled BP decoding algorithm divides all symbol nodes into several groups. In contrast to the original group shuffled BP, which generates groups automatically according to symbol positions, we propose a grouping method that forms groups according to the structure of the Tanner graph of the code. The proposed method accelerates the convergence of the group shuffled BP algorithm and attains a lower error rate within a small number of iterations. Simulation results show that the decoding performance of the proposed method is improved compared with those of the shuffled BP and group shuffled BP decoding algorithms.

  • Single-Letter Characterizations for Information Erasure under Restriction on the Output Distribution

    Naruaki AMADA  Hideki YAGI  

     
    PAPER-Information Theory

      Publicized: 2020/11/09  Vol: E104-A No:5  Page(s): 805-813

    To erase data containing confidential information from storage devices, the data are usually overwritten with an unrelated random sequence, which prevents the data from being restored. T. Matsuta and T. Uyematsu introduced the problem of minimizing the cost of information erasure under the constraint that the amount of leakage of the confidential information is asymptotically at most a given constant. Whereas the minimum cost of overwriting has been given for general sources, a single-letter characterization for stationary memoryless sources is not easily derived. In this paper, we give single-letter characterizations for stationary memoryless sources under two types of restrictions: one requires the output distribution of the encoder to be independent and identically distributed (i.i.d.), and the other requires it to be asymptotically memoryless but not necessarily i.i.d. The characterizations reveal the relation among the amount of information leakage, the minimum cost of information erasure, and the rate of the size of uniformly distributed sequences. The obtained results show that the minimum costs differ between the two restrictions.

  • Decision Feedback Scheme with Criterion LR+Th for the Ensemble of Linear Block Codes

    Toshihiro NIINOMI  Hideki YAGI  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E103-A No:1  Page(s): 334-345

    In the decision feedback scheme, Forney's decision criterion (Forney's rule: FR) is optimal in the sense that it satisfies the Neyman-Pearson lemma. Another prominent criterion, called LR+Th, was proposed by Hashimoto. Although LR+Th is suboptimal, its error exponent was shown, by random coding arguments, to be asymptotically equivalent to that of FR. In this paper, applying the technique of the DS2 bound, we derive an upper bound on the error probability of LR+Th for the ensemble of linear block codes. The new bound is significant in two respects. First, since the DS2-type bound can be expressed via the average weight distribution at finite code length, we can compare the error probability of FR with that of LR+Th for fixed-length codes. Second, the new bound elucidates the relation between the random coding exponents of block codes and those of linear block codes.

  • Adaptive Decoding Algorithms for Low-Density Parity-Check Codes over the Binary Erasure Channel

    Gou HOSOYA  Hideki YAGI  Manabu KOBAYASHI  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E92-A No:10  Page(s): 2418-2430

    Two decoding procedures combined with a belief-propagation (BP) decoding algorithm for low-density parity-check codes over the binary erasure channel are presented. These procedures continue decoding after the BP decoding algorithm terminates. We derive a condition under which our decoding algorithms can correct an erased bit that is uncorrectable by the BP decoding algorithm. We show by simulation results that the performance of our decoding algorithms is enhanced compared with that of the BP decoding algorithm, with only a small increase in decoding complexity.
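Over the binary erasure channel, the BP decoding algorithm that the proposed procedures extend reduces to the standard peeling decoder: any check equation involving exactly one erased bit determines that bit. A minimal sketch, with a toy parity-check matrix chosen purely for illustration:

```python
def bp_erasure_decode(H, received):
    """Peeling form of BP decoding over the BEC. H is a parity-check
    matrix given as a list of rows of 0/1; received is a list whose
    entries are 0, 1, or None (None marks an erasure). Repeatedly
    solve any check with exactly one erased bit; stop when no check
    makes progress (the point at which the paper's procedures
    would take over)."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            involved = [j for j, h in enumerate(row) if h == 1]
            erased = [j for j in involved if word[j] is None]
            if len(erased) == 1:
                j = erased[0]
                # The erased bit equals the XOR of the other bits in the check.
                word[j] = sum(word[k] for k in involved if k != j) % 2
                progress = True
    return word
```

When two or more erasures fall in every remaining check (a stopping set), the loop exits with `None` entries left, which is exactly the situation the paper's continued procedures address.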

  • Variable-Length Coding with Cost Allowing Non-Vanishing Error Probability

    Hideki YAGI  Ryo NOMURA  

     
    PAPER-Information Theory

      Vol: E100-A No:8  Page(s): 1683-1692

    We consider fixed-to-variable length coding with a regular cost function, allowing an error probability up to an arbitrary constant ε. We first derive finite-length upper and lower bounds on the average codeword cost, which are used to derive general formulas for two kinds of minimum achievable rates. For a fixed-to-variable length code, we call the set of source sequences that can be decoded without error the dominant set of source sequences. For any two regular cost functions, we reveal that the dominant set of source sequences for a code attaining the minimum achievable rate under one cost function is also the dominant set for a code attaining the minimum achievable rate under the other. We also give general formulas for the second-order minimum achievable rates.

  • A Generalization of the Parallel Error Correcting Codes by Allowing Some Random Errors

    Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER

      Vol: E90-A No:9  Page(s): 1745-1753

    This paper generalizes the parallel error correcting codes proposed by Ahlswede et al. over a new type of multiple access channel called the parallel error channel. The generalized parallel error correcting codes can handle more errors than the original ones. We present construction methods for independent and non-independent parallel error correcting codes, together with decoding methods. We derive bounds on the size of the respective parallel error correcting codes. The obtained results imply that a single parallel error correcting code can be constructed from two or more error correcting codes with distinct error correcting capabilities.

  • An Improved Method of Reliability-Based Maximum Likelihood Decoding Algorithms Using an Order Relation among Binary Vectors

    Hideki YAGI  Manabu KOBAYASHI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E87-A No:10  Page(s): 2493-2502

    Reliability-based maximum likelihood decoding (MLD) algorithms for linear block codes have been widely studied. These algorithms efficiently search for the most likely codeword using the generator matrix whose most reliable and linearly independent k (the code dimension) columns form the identity matrix. In this paper, conditions for omitting unnecessary metric computations of candidate codewords in reliability-based MLD algorithms are derived. The proposed conditions utilize an order relation on binary vectors, and a simple method for testing whether they are satisfied is devised. This test requires no real-number operations; consequently, the MLD algorithm employing it reduces the number of real-number operations compared to known reliability-based MLD algorithms.
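The role of an order relation on binary vectors can be illustrated as follows: with nonnegative bit reliabilities, the metric of a test error pattern is monotone in its support, so a pattern whose support contains that of a pattern already known to be too costly can be discarded by set comparison alone. This is a sketch of the general principle, not the paper's exact conditions.

```python
def support(pattern):
    """Positions set to 1 in a binary vector."""
    return {i for i, b in enumerate(pattern) if b == 1}

def can_skip(candidate, pruned):
    """Return True if `candidate` is componentwise >= some vector in
    `pruned` (i.e., its support contains that vector's support). Since
    bit reliabilities are nonnegative, its metric can only be larger,
    so it may be discarded without any real-number computation."""
    s = support(candidate)
    return any(support(p) <= s for p in pruned)
```

The comparison `support(p) <= s` is Python's subset test on sets, so the skip decision uses only integer and set operations, mirroring the paper's goal of avoiding real-number operations in the test itself.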

  • Fast Algorithm for Generating Candidate Codewords in Reliability-Based Maximum Likelihood Decoding

    Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    LETTER-Coding Theory

      Vol: E89-A No:10  Page(s): 2676-2683

    We consider reliability-based heuristic search methods for maximum likelihood decoding, which generate test error patterns (or, equivalently, candidate codewords) according to their heuristic values. Several studies have proposed methods for reducing the space complexity of these algorithms, which becomes prohibitively large for long block codes at medium-to-low signal-to-noise ratios. In this paper, we propose a new method for reducing the time complexity of generating candidate codewords by storing some already generated candidate codewords. Simulation results show that the resulting increase in memory size is small.

  • Upper Bounds on the Error Probability for the Ensemble of Linear Block Codes with Mismatched Decoding Open Access

    Toshihiro NIINOMI  Hideki YAGI  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Publicized: 2021/10/08  Vol: E105-A No:3  Page(s): 363-371

    In channel decoding, a decoder with suboptimal metrics may be used because of uncertainty about the channel statistics or limitations of the decoder. In this case, the decoding metric differs from the actual channel metric, and this setting is called mismatched decoding. In this paper, applying the technique of the DS2 bound, we derive an upper bound on the error probability of mismatched decoding over a regular channel for the ensemble of linear block codes defined by Hof, Sason and Shamai. Assuming the ensemble of random linear block codes defined by Gallager, we show that the obtained bound is no looser than the conventional bound. We also give a numerical example for the ensemble of LDPC codes, also introduced by Gallager, which shows that our proposed bound is tighter than the conventional bound. Furthermore, we obtain a single-letter error exponent for linear block codes.

  • Density Evolution Analysis of Robustness for LDPC Codes over the Gilbert-Elliott Channel

    Manabu KOBAYASHI  Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E91-A No:10  Page(s): 2754-2764

    In this paper, we analyze the robustness of low-density parity-check (LDPC) codes over the Gilbert-Elliott (GE) channel. For this purpose, we propose a density evolution method for the case where the LDPC decoder uses mismatched parameters for the GE channel. Using this method, we derive the region of tuples of true parameters and mismatched decoding parameters of the GE channel for which the decoding error probability asymptotically approaches zero.

  • A Modification Method for Constructing Low-Density Parity-Check Codes for Burst Erasures

    Gou HOSOYA  Hideki YAGI  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E89-A No:10  Page(s): 2501-2509

    We study a modification method for constructing low-density parity-check (LDPC) codes for solid burst erasures. Our proposed method is based on a column permutation technique for the parity-check matrix of the original LDPC codes. It can change the burst erasure correction capability without degrading performance over random erasure channels. We show by simulation results that the performance of codes permuted by our method is better than that of the original codes, especially with two or more solid burst erasures.
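The column permutation technique can be sketched as follows; the specific interleaving permutation below is an illustrative choice, not the paper's construction. Permuting the columns of H merely reorders code symbols, so random-erasure performance is unchanged, while a solid burst of erasures is spread across different check equations.

```python
from math import gcd

def interleave_permutation(n, step):
    """Map position i to (i * step) mod n. With gcd(step, n) == 1 this
    is a bijection, so consecutive (burst) positions land `step` apart
    in the permuted code."""
    assert gcd(step, n) == 1
    return [(i * step) % n for i in range(n)]

def permute_columns(H, perm):
    """Column-permuted parity-check matrix: column j of the new matrix
    is column perm[j] of H. The permuted code is the original code with
    its symbols reordered, so random-erasure behavior is unchanged."""
    return [[row[perm[j]] for j in range(len(row))] for row in H]
```

A solid burst erasing positions 0..2 of the permuted code then corresponds to scattered positions of the original code, which a sparse parity-check matrix is more likely to resolve.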

  • Reliability Function and Strong Converse of Biometrical Identification Systems Based on List-Decoding

    Vamoua YACHONGKA  Hideki YAGI  

     
    LETTER-Information Theory

      Vol: E100-A No:5  Page(s): 1262-1266

    The biometrical identification system, introduced by Willems et al., identifies individuals based on their measurable physical characteristics. Willems et al. characterized the identification capacity of a discrete memoryless biometrical identification system from an information-theoretic perspective. Recently, Mori et al. extended this scenario to list-decoding whose list size is an exponential function of the data length. However, how the maximum identification error probability (IEP) behaves for a given rate as the data length increases has not yet been characterized for list-decoding. In this letter, we investigate the reliability function of the system under fixed-size list-decoding, i.e., the optimal exponential behavior of the maximum IEP. We then use Arimoto's argument to derive a lower bound on the maximum IEP with list-decoding when the rate exceeds the capacity, which leads to the strong converse theorem. All results are derived without assuming that the unknown individual is uniformly distributed, and the identification is performed without knowledge of the prior distribution.

  • Construction of Locally Repairable Codes with Multiple Localities Based on Encoding Polynomial

    Tomoya HAMADA  Hideki YAGI  

     
    PAPER-Coding theory and techniques

      Vol: E101-A No:12  Page(s): 2047-2054

    Locally repairable codes, which can repair erased symbols from other symbols, have attracted a good deal of attention in recent years because their local repair property is effective in distributed storage systems. (r_u, δ_u)_{u∈[s]}-locally repairable codes with multiple localities, an extension of ordinary locally repairable codes, can repair δ_u-1 erased symbols simultaneously from a set consisting of at most r_u symbols. An upper bound on the minimum distance of these codes, and a construction of optimal codes attaining this bound with equality, were given by Chen, Hao, and Xia. In this paper, we discuss the parameter restrictions of the existing construction, and we propose explicit constructions of optimal codes with multiple localities under relaxed restrictions, based on the encoding polynomial introduced by Tamo and Barg. The proposed construction can produce a code whose minimum distance is unattainable by the existing construction.

  • On the DS2 Bound for Forney's Generalized Decoding Using Non-Binary Linear Block Codes

    Toshihiro NIINOMI  Hideki YAGI  Shigeichi HIRASAWA  

     
    PAPER-Coding Theory

      Vol: E101-A No:8  Page(s): 1223-1234

    Recently, Hof et al. extended the type-2 Duman and Salehi (DS2) bound to generalized decoding with decision criterion FR, which was introduced by Forney. From this bound, they derived two significant bounds: the Shulman-Feder bound for generalized decoding (GD) over binary-input output-symmetric channels, and an upper bound for an ensemble of linear block codes obtained by applying the average complete weight distribution directly to the DS2 bound for GD. For the Shulman-Feder bound for GD, the authors previously derived a condition under which the upper bound is minimized at an intermediate step, and showed that this condition yields a new bound tighter than that of Hof et al. In this paper, we first extend this result to non-binary linear block codes used over a class of symmetric channels called regular channels. Next, we derive a new, tighter bound for an ensemble of linear block codes based on the average weight distribution.

  • Fundamental Limits of Biometric Identification System Under Noisy Enrollment

    Vamoua YACHONGKA  Hideki YAGI  

     
    PAPER-Information Theory

      Publicized: 2020/07/14  Vol: E104-A No:1  Page(s): 283-294

    In this study, we investigate the fundamental trade-off among identification, secrecy, template, and privacy-leakage rates in a biometric identification system. Ignatenko and Willems (2015) studied this system under the assumption that the channel in the enrollment process is noiseless, and did not consider the template rate. In the enrollment process, however, noise is likely to occur when biometric data are scanned. In this paper, we impose a noisy channel on the enrollment process and characterize the capacity region of the rate tuples. The capacity region is proved by a novel technique using two auxiliary random variables, which has not appeared in previous studies. As special cases, the obtained result reduces to the characterization given by Ignatenko and Willems (2015), where the enrollment channel is noiseless and there is no constraint on the template rate, and it also coincides with the result derived by Günlü and Kramer (2018), where there is only one individual.

  • Biometric Identification Systems with Both Chosen and Generated Secret Keys by Allowing Correlation

    Vamoua YACHONGKA  Hideki YAGI  

     
    PAPER-Shannon Theory

      Publicized: 2022/09/06  Vol: E106-A No:3  Page(s): 382-393

    We propose a biometric identification system in which chosen and generated secret keys are used simultaneously, and investigate its fundamental limits from an information-theoretic perspective. The system consists of two phases: enrollment and identification. In the enrollment phase, for each user, the encoder uses an independently chosen secret key together with the biometric identifier to generate another secret key and helper data. In the identification phase, observing the biometric sequence of the identified user, the decoder estimates the index and the chosen and generated secret keys of that user based on the helper data stored in the system database. In this study, the capacity region of such a system is characterized. In our problem setting, the chosen and generated secret keys are allowed to be correlated. As a result, by permitting this correlation, the sum of the identification, chosen-secret key, and generated-secret key rates can achieve a larger value than in the case where the keys are uncorrelated. Moreover, the minimum storage rate changes in accordance with both the identification and chosen-secret key rates, whereas the minimum privacy-leakage rate depends only on the identification rate.
