IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.48

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1

Volume E106-A No.3 (Publication Date: 2023/03/01)

    Special Section on Cryptography and Information Security
  • FOREWORD Open Access

    Tetsuya IZU  

     
    FOREWORD

      Page(s):
    169-169
  • APVAS: Reducing the Memory Requirement of AS_PATH Validation by Introducing Aggregate Signatures into BGPsec

    Ouyang JUNJIE  Naoto YANAI  Tatsuya TAKEMURA  Masayuki OKADA  Shingo OKAMURA  Jason Paul CRUZ  

     
    PAPER

      Publicized:
    2023/01/11
      Page(s):
    170-184

    The BGPsec protocol, an extension of the Border Gateway Protocol (BGP) for Internet routing, uses digital signatures to guarantee the validity of routing information. However, the digital signatures attached to routing information in BGPsec sharply increase the memory consumption of BGP routers, creating a gaping security hole in today's Internet. This problem hinders the practical realization and implementation of BGPsec. In this paper, we present APVAS (AS path validation based on aggregate signatures), a new protocol that reduces the memory consumption of routers running BGPsec when validating paths in routing information. APVAS relies on a novel aggregate signature scheme that compresses individually generated signatures into a single signature. Furthermore, we implement a prototype of APVAS on the BIRD Internet Routing Daemon and demonstrate its efficiency on actual BGP connections. Our results show that the routing tables of routers running BGPsec with APVAS consume 20% less memory than those running conventional BGPsec. We also confirm the effectiveness of APVAS in the real world by using 800,000 routes, which are equivalent to the full route information on a global scale.
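The memory saving that aggregation buys can be sketched in a few lines. The toy below is an illustration only: `toy_sign` is a keyed hash standing in for a real signature, and XOR-combining is not a secure aggregation; real aggregate signature schemes combine group elements while preserving verifiability.

```python
import hashlib

def toy_sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for a real signature: a keyed hash (NOT a secure signature).
    return hashlib.sha256(key + msg).digest()

def aggregate(sigs):
    # Toy aggregation: XOR-combine fixed-size values into one. Real
    # aggregate signature schemes combine group elements and remain
    # verifiable; this only illustrates the memory model.
    agg = bytes(32)
    for s in sigs:
        agg = bytes(a ^ b for a, b in zip(agg, s))
    return agg

# An AS_PATH of 5 hops: each AS signs its path segment.
path = [b"AS65001", b"AS65002", b"AS65003", b"AS65004", b"AS65005"]
sigs = [toy_sign(b"key-%d" % i, hop) for i, hop in enumerate(path)]
agg = aggregate(sigs)

# A router stores one 32-byte aggregate instead of five 32-byte signatures.
print(len(b"".join(sigs)), "bytes ->", len(agg), "bytes")
```

The point is only the storage model: per-path state shrinks from one signature per hop to a single constant-size value.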

  • Solving the Problem of Blockwise Isomorphism of Polynomials with Circulant Matrices

    Yasufumi HASHIMOTO  

     
    PAPER

      Publicized:
    2022/10/07
      Page(s):
    185-192

    The problem of Isomorphism of Polynomials (IP problem) is known to be important for studying the security of multivariate public key cryptosystems, one of the major candidates for post-quantum cryptography, against key recovery attacks. In recent years, several schemes based on the IP problem itself or its generalizations have been proposed. At PQCrypto 2020, Santoso introduced a generalization of the problem of Isomorphism of Polynomials, called the problem of Blockwise Isomorphism of Polynomials (BIP problem), and proposed a new Diffie-Hellman type encryption scheme based on this problem with circulant matrices (the BIPC problem). Quite recently, Ikematsu et al. proposed an attack called the linear stack attack to recover an equivalent key of Santoso's encryption scheme. While this attack reduced the security of the scheme, it does not contribute to solving the BIPC problem itself. In the present paper, we describe how to solve the BIPC problem directly by simplifying it using the conjugation property of circulant matrices. In fact, we experimentally solved, in about 10 minutes, an instance of the BIPC problem whose parameters have 256-bit security by Santoso's security analysis and 72.7-bit security against the linear stack attack.
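The conjugation-based simplification rests on a basic fact that is easy to check directly: circulant matrices are polynomials in the cyclic-shift matrix and therefore commute with one another. A minimal pure-Python check (the modulus and entries here are arbitrary):

```python
def circulant(first_row):
    # Row i is the first row cyclically shifted right by i positions.
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def matmul(A, B, p):
    # Matrix product over the integers mod p.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

p = 97
A = circulant([3, 1, 4, 1])
B = circulant([2, 7, 1, 8])
assert matmul(A, B, p) == matmul(B, A, p)   # circulant matrices commute
```

The product of two circulants is again circulant, which is what makes the ring of circulant matrices so much more tractable than general matrix groups.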

  • A Generic Construction of CCA-Secure Identity-Based Encryption with Equality Test against Insider Attacks

    Keita EMURA  Atsushi TAKAYASU  

     
    PAPER

      Publicized:
    2022/05/30
      Page(s):
    193-202

    Identity-based encryption with equality test (IBEET) is a generalization of traditional identity-based encryption (IBE) and public key searchable encryption, where trapdoors enable users to check whether two ciphertexts of distinct identities are encryptions of the same plaintext. By definition, IBEET cannot achieve indistinguishability security against insiders, i.e., users who have trapdoors. To address this issue, IBEET against insider attacks (IBEETIA) was later introduced as a dual primitive. While all users of IBEETIA are able to check whether two ciphertexts are encryptions of the same plaintext, only users who have tokens are able to encrypt plaintexts. Hence, IBEETIA is able to achieve indistinguishability security. On the other hand, the definition of IBEETIA weakens the notion of IBE due to this restriction on encryption. Nevertheless, known IBEETIA schemes make use of rich algebraic structures such as bilinear groups and lattices. In this paper, we propose a generic construction of IBEETIA without resorting to rich algebraic structures. In particular, the only building blocks of the proposed construction are symmetric key encryption and pseudo-random permutations in the standard model. If the symmetric key encryption scheme satisfies CCA security, our proposed IBEETIA scheme also satisfies CCA security.
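The equality-test functionality itself can be sketched with standard-library primitives. The sketch below is illustrative only and is not the paper's construction: identities are collapsed into a single shared token for brevity, and a hash-derived keystream stands in for the symmetric-key encryption building block.

```python
import hashlib, hmac, os

def encrypt(token: bytes, key: bytes, m: bytes):
    # Randomized body (toy stream cipher) plus a deterministic tag of the
    # plaintext under the token. Only token holders can produce the tag.
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    body = bytes(a ^ b for a, b in zip(m, stream))
    tag = hmac.new(token, m, hashlib.sha256).digest()
    return nonce, body, tag

def eq_test(ct1, ct2) -> bool:
    # Anyone holding the two ciphertexts can compare the tags.
    return hmac.compare_digest(ct1[2], ct2[2])

token = os.urandom(32)
c1 = encrypt(token, os.urandom(32), b"same plaintext")
c2 = encrypt(token, os.urandom(32), b"same plaintext")
c3 = encrypt(token, os.urandom(32), b"other msg")
assert eq_test(c1, c2) and not eq_test(c1, c3)
```

The intuition matches the abstract: encryption requires the token, while equality testing does not require decryption.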

  • A New Analysis of the Kipnis-Shamir Method Solving the MinRank Problem

    Shuhei NAKAMURA  Yacheng WANG  Yasuhiko IKEMATSU  

     
    PAPER

      Publicized:
    2022/09/29
      Page(s):
    203-211

    The MinRank problem is investigated as a problem related to rank attacks in multivariate cryptography and to the decoding of rank codes in coding theory. The Kipnis-Shamir method is one of the methods for solving the problem, and recently, significant progress has been made in its complexity estimation by Verbel et al. As this method reduces the problem to an MQ problem, which asks for a solution to a system of quadratic equations, its complexity depends on the solving degree of the quadratic system deduced from the method. A theoretical value introduced by Verbel et al. approximates the minimal solving degree of the quadratic systems in the method, although their value is defined only under a certain limitation on the systems considered. A quadratic system outside their limitation often has a larger solving degree, but its solving complexity is not necessarily higher because it has fewer variables and equations. Thus, to discuss the best complexity of the Kipnis-Shamir method, a theoretical value is needed that approximates the solving degree of each quadratic system deduced from the method. A quadratic system deduced from the Kipnis-Shamir method always has a multi-degree, and the solving complexity is influenced by this property. In this study, we introduce a theoretical value defined by such a multi-degree and show that it approximates the solving degree of each quadratic system. The systems deduced from the method can thus be compared, and the best complexity discussed. As an application, for the MinRank attack using the Kipnis-Shamir method against the multivariate signature scheme Rainbow, we show a case in which a deduced quadratic system outside Verbel et al.'s limitation is the best. In particular, the complexity estimates of the MinRank attack using the Kipnis-Shamir method against the Rainbow parameter sets I, III and V are reduced by about 172, 140 and 212 bits, respectively, from Verbel et al.'s estimates.

  • PoS Blockchain-Based Forward-Secure Public Key Encryption with Immutable Keys and Post-Compromise Security Guarantees

    Seiya NUTA  Jacob C. N. SCHULDT  Takashi NISHIDE  

     
    PAPER

      Publicized:
    2022/11/09
      Page(s):
    212-227

    We present a forward-secure public-key encryption (PKE) scheme without key update, i.e., both public and private keys are immutable. In contrast, prior forward-secure PKE schemes achieve forward security by constantly updating the secret keys. Our scheme is based on witness encryption by Garg et al. (STOC 2013) and a proof-of-stake blockchain with the distinguishable forking property introduced by Goyal et al. (TCC 2017), and ensures that a ciphertext cannot be decrypted more than once, thereby rendering a compromised secret key useless with respect to decryption of past ciphertexts the legitimate user has already decrypted. In this work, we formalize the notion of blockchain-based forward-secure PKE, show the feasibility of constructing a forward-secure PKE scheme without key update, and discuss interesting properties of our scheme such as post-compromise security.

  • Short Lattice Signature Scheme with Tighter Reduction under Ring-SIS Assumption

    Kaisei KAJITA  Go OHTAKE  Kazuto OGAWA  Koji NUIDA  Tsuyoshi TAKAGI  

     
    PAPER

      Publicized:
    2022/09/08
      Page(s):
    228-240

    We propose a short signature scheme under the ring-SIS assumption in the standard model. Specifically, by revisiting an existing construction [Ducas and Micciancio, CRYPTO 2014], we demonstrate lattice-based signatures with improved reduction loss. To the best of our knowledge, no previous lattice tag-based signature uses multiple tags in the signature simulation of its security proof. We address the tag-collision possibility in the lattice setting, which improves the reduction loss. Our scheme generates tags from messages by constructing a scheme under a mild security condition: existential unforgeability against random message attack with auxiliary information. Our scheme can thus reduce the signature size, since it does not need to send tags with the signatures. It achieves short signatures of size O(1) and a tighter reduction loss than Ducas et al.'s scheme. Our proposed scheme has two variants. One variant has a tighter reduction and the same verification key size of O(log n) as Ducas et al.'s scheme, where n is the security parameter. The other achieves a much tighter reduction loss of O(Q/n) and a verification key size of O(n), where Q is the number of signing queries.

  • Multiparallel MMT: Faster ISD Algorithm Solving High-Dimensional Syndrome Decoding Problem

    Shintaro NARISADA  Kazuhide FUKUSHIMA  Shinsaku KIYOMOTO  

     
    PAPER

      Publicized:
    2022/11/09
      Page(s):
    241-252

    The hardness of the syndrome decoding problem (SDP) is the primary evidence for the security of code-based cryptosystems, which are among the finalists in the post-quantum cryptography standardization project conducted by the U.S. National Institute of Standards and Technology (NIST-PQC). Information set decoding (ISD) is a general term for algorithms that solve SDP efficiently. In this paper, we conducted a concrete analysis of the time complexity of the latest ISD algorithms under memory limitations using the syndrome decoding estimator proposed by Esser et al. As a result, we show that theoretically nonoptimal ISDs, such as May-Meurer-Thomae (MMT) and May-Ozerov, have lower time complexity than other ISDs on some actual SDP instances. Based on these facts, we further studied the possibility of multiple parallelization for these ISDs and propose the first GPU algorithm for MMT, the multiparallel MMT algorithm. In our experiments, we show that the multiparallel MMT algorithm is faster than existing ISD algorithms. In addition, we report the first successful attempts to solve the 510-, 530-, 540- and 550-dimensional SDP instances in the Decoding Challenge contest using multiparallel MMT.
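For intuition about the underlying problem, a syndrome decoding instance can be solved by brute force over low-weight error vectors; real ISD algorithms such as MMT reduce this search drastically. A toy instance with arbitrarily small parameters:

```python
import itertools, random

# Toy syndrome decoding by exhaustive search over weight-w errors
# (an illustrative baseline, not an ISD algorithm).
random.seed(1)
n, k, w = 12, 6, 2                      # code length, dimension, error weight
H = [[random.randrange(2) for _ in range(n)] for _ in range(n - k)]

def syndrome(err):
    return tuple(sum(H[i][j] * err[j] for j in range(n)) % 2
                 for i in range(n - k))

# Plant a weight-w error and recover a consistent one from its syndrome.
e = [0] * n
for j in random.sample(range(n), w):
    e[j] = 1
s = syndrome(e)

found = None
for support in itertools.combinations(range(n), w):
    cand = [1 if j in support else 0 for j in range(n)]
    if syndrome(cand) == s:
        found = cand
        break
assert found is not None and syndrome(found) == s
```

The search space here is C(n, w); cryptographic parameters (n in the hundreds, w in the dozens) make this exhaustive search astronomically large, which is exactly what ISD improvements and GPU parallelization attack.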

  • Security Evaluation of Initialization Phases and Round Functions of Rocca and AEGIS

    Nobuyuki TAKEUCHI  Kosei SAKAMOTO  Takanori ISOBE  

     
    PAPER

      Publicized:
    2022/11/09
      Page(s):
    253-262

    Authenticated Encryption with Associated Data (AEAD) plays an important role in guaranteeing confidentiality, integrity, and authenticity in network communications. To meet the requirements of high-performance applications, several AEADs make use of AES New Instructions (AES-NI), which can perform AES encryption and decryption operations dramatically fast through hardware acceleration. At SAC 2013, Wu and Preneel proposed an AES-based AEAD scheme called AEGIS-128/128L/256 to achieve high-speed software implementation. At FSE 2016, Jean and Nikolić generalized the construction of AEGIS and proposed more efficient round functions. At ToSC 2021, Sakamoto et al. further improved the constructions of Jean and Nikolić and proposed an AEAD scheme called Rocca for beyond-5G applications. In this study, we first evaluate the security of the initialization phases of Rocca and the AEGIS family against differential and integral attacks using MILP (Mixed Integer Linear Programming) tools. Specifically, according to the evaluation based on lower bounds for the number of active S-boxes, the initialization phases of AEGIS-128/128L/256 are secure against differential attacks after 4/3/6 rounds, respectively. Regarding integral attacks, we present integral distinguishers on 6 rounds and 6/5/7 rounds of the initialization phases of Rocca and AEGIS-128/128L/256, respectively. In addition, we evaluate the round function of Rocca and those of Jean and Nikolić as cryptographic permutations against differential, impossible differential, and integral attacks. Our results indicate that, for differential attacks, the number of active S-boxes grows faster in Rocca than in the constructions of Jean and Nikolić. For impossible differential and integral attacks, we show that the round function of Rocca achieves a sufficient level of security against these attacks in a smaller number of rounds than those of Jean and Nikolić.

  • Linear Algebraic Approach to Strongly Secure Ramp Secret Sharing for General Access Structures with Application to Symmetric PIR

    Reo ERIGUCHI  Noboru KUNIHIRO  Koji NUIDA  

     
    PAPER

      Publicized:
    2022/09/13
      Page(s):
    263-271

    Ramp secret sharing is a variant of secret sharing that can achieve a better information ratio than perfect schemes by allowing some partial information about the secret to leak out. Strongly secure ramp schemes can control the amount of leaked information about the components of a secret. In this paper, we reduce the construction of strongly secure ramp secret sharing for general access structures to a linear algebraic problem. As a result, we show that previous results on strongly secure network coding imply two linear transformation methods for making a given linear ramp scheme strongly secure. They are explicit or provide a deterministic algorithm, whereas the previous methods that work for any linear ramp scheme are non-constructive. In addition, we present a novel application of strongly secure ramp schemes to symmetric private information retrieval (PIR) in a multi-user setting. Our solution is advantageous over those based on a non-strongly secure scheme in that it reduces the amount of communication between users and servers as well as the amount of correlated randomness that the servers generate in the setup.
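The flavor of a linear ramp scheme can be seen in a toy polynomial-based example (a sketch for intuition, not the paper's linear-algebraic construction): a two-part secret occupies the low coefficients of a random degree-2 polynomial over GF(p), so any three shares recover both parts while fewer shares leave residual uncertainty.

```python
import random

p = 10007  # small prime modulus, chosen arbitrarily

def share(s0, s1, xs):
    # f(x) = s0 + s1*x + r*x^2 with random blinding coefficient r.
    r = random.randrange(p)
    f = lambda x: (s0 + s1 * x + r * x * x) % p
    return [(x, f(x)) for x in xs]

def reconstruct(shares):
    # Lagrange interpolation; evaluating the interpolated polynomial at
    # chosen points isolates each secret coefficient.
    def interp_at(x0):
        total = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (x0 - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, -1, p)) % p
        return total
    a, b, c = interp_at(0), interp_at(1), interp_at(p - 1)  # f(0), f(1), f(-1)
    return a, (b - c) * pow(2, -1, p) % p                   # s0, (f(1)-f(-1))/2

shares = share(1234, 5678, [2, 3, 5])
assert reconstruct(shares) == (1234, 5678)
```

A single share leaks at most one field element of information about the pair (s0, s1), which is precisely the "controlled leakage" that strong security refines component by component.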

  • Perfectly Secure Oblivious Priority Queue

    Atsunori ICHIKAWA  Wakaha OGATA  

     
    PAPER

      Publicized:
    2022/08/23
      Page(s):
    272-280

    An Oblivious Priority Queue (OPQ) is a cryptographic primitive that enables a client to outsource its data to a dishonest server and to securely manage the data according to a priority queue algorithm. Though the first OPQ achieves perfect security, it supports only two operations, inserting an element and extracting the top-priority element, which are the minimal requirements for a priority queue. In addition, this OPQ allows an adversary to observe which operation is in progress, which leaks the exact number of elements in the data structure. On the other hand, there are many subsequent works on OPQs that implement additional priority queue operations, hide the running operations, and improve efficiency. Though recent works achieve optimal efficiency, all of them attain only statistical or computational security. Aiming to reconcile the perfect security of the first OPQ with all the functions (including operation hiding) supported by recent OPQs, we construct a novel perfectly secure OPQ that can simulate the following operations while hiding which one is in progress: inserting an element, extracting the top-priority one, deleting an element, and modifying the priority of an element. The efficiency of our scheme is O(log² N), which is larger than that of the best known statistically secure OPQ but the same as the known perfectly secure scheme.

  • Exploring Effect of Residual Electric Charges on Cryptographic Circuits: Extended Version

    Mitsuru SHIOZAKI  Takeshi SUGAWARA  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2022/09/15
      Page(s):
    281-293

    We study a new transistor-level side-channel leakage caused by charges trapped between stacked transistors, namely residual electric charges (RECs). Building leakage models is important in designing countermeasures against side-channel attacks (SCAs). Previous work showed that even transistor-level leakage is measurable with a local electromagnetic measurement. One example is the current-path leak [1], [2]: an attacker can distinguish the number of transistors in the current path activated during a signal transition. Addressing this issue, Sugawara et al. proposed to use a mirror circuit that has the same number of transistors on each of its possible current paths. We show that this countermeasure is insufficient by presenting a new transistor-level leakage, caused by RECs, not covered in the previous work. RECs can carry the history of a gate's state over multiple clock cycles and change the gate's electrical behavior. We experimentally verify that RECs cause exploitable side-channel leakage. We also propose a countermeasure against REC leaks and design advanced encryption standard-128 (AES-128) circuits using IO-masked dual-rail read-only memory in a 180-nm complementary metal-oxide-semiconductor (CMOS) process. We compare the resilience of our AES-128 circuits against electromagnetic analysis attacks with and without the countermeasure. We further extend RECs to physically unclonable functions (PUFs), demonstrating that RECs affect the performance of arbiter and ring-oscillator PUFs through experiments using our custom chips fabricated in 180- and 40-nm CMOS processes.

  • Profiling Deep Learning Side-Channel Attacks Using Multi-Label against AES Circuits with RSM Countermeasure

    Yuta FUKUDA  Kota YOSHIDA  Hisashi HASHIMOTO  Kunihiro KURODA  Takeshi FUJINO  

     
    PAPER

      Publicized:
    2022/09/08
      Page(s):
    294-305

    Deep learning side-channel attacks (DL-SCAs) have been actively studied in recent years. In DL-SCAs, deep neural networks (DNNs) are trained to predict the internal states of a cryptographic operation from side-channel information such as power traces. Selecting suitable DNN output labels expressing the internal states is important for successful DL-SCAs. We focus on the multi-label method proposed by Zhang et al. for hardware-implemented advanced encryption standard (AES). They used the power traces supplied from the AES-HD public dataset and reported revealing a single key byte under conditions in which the target key was the same as the key used for DNN training (the profiling key). In this paper, we discuss an improvement for revealing all 16 key bytes under practical conditions in which the target key differs from the profiling key. We prepare hardware-implemented AES without SCA countermeasures on an ASIC for the experimental environment. First, our experimental results show that a DNN using multi-label does not learn the side-channel leakage sufficiently from power traces acquired with only one key. Second, we report that a DNN using multi-label learns most of the side-channel leakage by using three kinds of profiling keys, and all 16 target key bytes are successfully revealed even if the target key differs from the profiling keys. Finally, we apply the proposed method, DL-SCA using multi-label and three profiling keys, against hardware-implemented AES with the rotating S-boxes masking (RSM) countermeasure. The experimental result shows that all 16 key bytes are successfully revealed by using only 2,000 attack traces. We also study the reasons for the high performance of the proposed method against the RSM countermeasure and find that information from the weak bits is effectively exploited.
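The label-encoding idea can be stated in a few lines of code (a toy encoding with no DNN attached; the split into bitwise labels illustrates the general multi-label idea, not Zhang et al.'s exact network):

```python
# Instead of one 256-class label for an intermediate byte, each of its
# 8 bits becomes an independent binary label, so one trace supervises
# eight outputs at once.
def one_hot(byte):
    v = [0] * 256
    v[byte] = 1
    return v

def multi_label(byte):
    # Bit i (LSB first) becomes binary label i.
    return [(byte >> i) & 1 for i in range(8)]

assert sum(one_hot(0xB5)) == 1
assert multi_label(0xB5) == [1, 0, 1, 0, 1, 1, 0, 1]
```

Each binary label has roughly balanced classes, whereas a 256-class one-hot target gives each trace only one positive class out of 256, which is one reason multi-label training can extract more from the same traces.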

  • Automorphism Shuffles for Graphs and Hypergraphs and Its Applications

    Kazumasa SHINAGAWA  Kengo MIYAMOTO  

     
    PAPER

      Publicized:
    2022/09/12
      Page(s):
    306-314

    In card-based cryptography, a deck of physical cards is used to achieve secure computation. A shuffle, which randomly permutes a card-sequence along with some probability distribution, ensures the security of a card-based protocol. The authors proposed a new class of shuffles called graph shuffles, which randomly permutes a card-sequence by an automorphism of a directed graph (New Generation Computing 2022). For a directed graph G with n vertices and m edges, such a shuffle could be implemented with pile-scramble shuffles with 2(n + m) cards. In this paper, we study graph shuffles and give an implementation, an application, and a slight generalization. First, we propose a new protocol for graph shuffles with 2n + m cards. Second, as a new application of graph shuffles, we show that any cyclic group shuffle, which is a shuffle over a cyclic group, is a graph shuffle associated with some graph. Third, we define a hypergraph shuffle, which is a shuffle by an automorphism of a hypergraph, and show that any hypergraph shuffle can also be implemented with pile-scramble shuffles.
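The observation that a cyclic group shuffle is a graph shuffle can be made concrete with a toy example (our own rendering for illustration, not the paper's pile-scramble implementation): the automorphisms of a directed n-cycle are exactly the n rotations, so sampling one uniformly and applying it to the card sequence realizes a uniform cyclic shuffle.

```python
import random

def cycle_automorphisms(n):
    # Automorphisms of the directed cycle 0 -> 1 -> ... -> n-1 -> 0
    # are exactly the n rotations.
    return [[(i + r) % n for i in range(n)] for r in range(n)]

def graph_shuffle(cards):
    # Sample an automorphism uniformly and permute the card sequence by it.
    perm = random.choice(cycle_automorphisms(len(cards)))
    return [cards[perm[i]] for i in range(len(cards))]

assert "".join(graph_shuffle(list("ABCD"))) in {"ABCD", "BCDA", "CDAB", "DABC"}
```

Replacing the cycle by an arbitrary directed graph (or hypergraph) gives the general shuffle class the paper studies; the cost question is then how few cards a pile-scramble implementation needs.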

  • A Computationally Efficient Card-Based Majority Voting Protocol with Fewer Cards in the Private Model

    Yoshiki ABE  Takeshi NAKAI  Yohei WATANABE  Mitsugu IWAMOTO  Kazuo OHTA  

     
    PAPER

      Publicized:
    2022/10/20
      Page(s):
    315-324

    Card-based cryptography realizes secure multiparty computation using physical cards. In 2018, Watanabe et al. proposed a card-based three-input majority voting protocol using three cards. For a card-based cryptographic protocol with n-bit inputs, it is known that a protocol using shuffles requires at least 2n cards. In contrast, as in Watanabe et al.'s protocol, a protocol using private permutations can be constructed with fewer cards than this lower bound. Moreover, an n-input protocol using private permutations would not even require n cards in principle, since a private permutation depending on an input can represent the input without using additional cards. However, there are only a few protocols with fewer than n cards. Recently, Abe et al. extended Watanabe et al.'s protocol and proposed an n-input majority voting protocol with n cards and n + ⌊n/2⌋ + 1 private permutations. This paper proposes an n-input majority voting protocol with ⌈n/2⌉ + 1 cards and 2n-1 private permutations, also obtained by extending Watanabe et al.'s protocol. Compared with Abe et al.'s protocol, although the number of private permutations increases by about n/2, the number of cards is reduced by about n/2. In addition, unlike Abe et al.'s protocol, our protocol includes Watanabe et al.'s protocol as the special case n = 3.

  • Secure Revocation Features in eKYC - Privacy Protection in Central Bank Digital Currency

    Kazuo TAKARAGI  Takashi KUBOTA  Sven WOHLGEMUTH  Katsuyuki UMEZAWA  Hiroki KOYANAGI  

     
    PAPER

      Publicized:
    2022/10/07
      Page(s):
    325-332

    Central bank digital currencies require the implementation of eKYC to verify online whether a trading customer is eligible. When an organization issues an ID proof of a customer for eKYC, that proof is usually produced in practice by a hierarchy of issuers. However, due to privacy concerns, the customer wants to disclose only part of the issuer chain and documents to the trading partner. In this research, delegatable anonymous credentials (DAC) and zero-knowledge range proofs (ZKRP) allow customers to arbitrarily replace parts of the delegation chain and message body with range proofs expressed as inequalities. In that way, customers can protect the privacy they need under their own control. A zero-knowledge proof is applied to prove the inequality between two time stamps issued by the time stamp server (signature presentation, public key revocation, or non-revocation) without disclosing the signature content and the stamped time. This makes it possible to prove that the registration information of a national ID card is valid or invalid while keeping the user's personal information anonymous. This research aims to contribute to the realization of a sustainable financial system based on self-sovereign identity management with privacy-enhanced PKI.

  • mPoW: How to Make Proof of Work Meaningful

    Takaki ASANUMA  Takanori ISOBE  

     
    PAPER

      Publicized:
    2022/11/09
      Page(s):
    333-340

    Proof of Work (PoW), a consensus algorithm for blockchains, entails a large number of meaningless hash calculations and wastes electric power and computational resources. As of 2021, it is estimated that the PoW of Bitcoin consumes as much electricity as Pakistan's annual power consumption (91 TWh). This is a serious problem with respect to sustainable development goals. To solve this problem, this study proposes Meaningful-PoW (mPoW), which involves a meaningful calculation, namely the application of a genetic algorithm (GA) to PoW. Specifically, by using the intermediate values periodically generated through GA calculations as an input to the Hashcash scheme used in Bitcoin, it is possible to make the computation meaningful (a GA optimization problem) while maintaining the properties required for PoW. Furthermore, by applying a device-binding technology, mPoW can be ASIC-resistant without requiring a large memory. Thus, we show that mPoW can reduce the excessive consumption of both power and computational resources.
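The coupling of GA work to Hashcash can be sketched as follows (the population size, fitness function, and serialization here are all invented for illustration; this is not the paper's exact scheme): the GA's intermediate population is serialized into the hash input, so producing a valid nonce presupposes having run the GA generations.

```python
import hashlib, random

random.seed(0)

def ga_step(pop, fitness):
    # One generation: keep the fitter half, refill with mutated copies.
    pop = sorted(pop, key=fitness, reverse=True)
    half = pop[: len(pop) // 2]
    return half + [p ^ (1 << random.randrange(16)) for p in half]

def mpow(difficulty_bits, generations=3):
    fitness = lambda x: -abs(x - 31337)          # toy optimization target
    pop = [random.randrange(1 << 16) for _ in range(8)]
    for _ in range(generations):
        pop = ga_step(pop, fitness)
    state = repr(sorted(pop)).encode()           # GA intermediate value
    nonce = 0
    while True:                                  # Hashcash over the GA state
        h = hashlib.sha256(state + str(nonce).encode()).digest()
        if int.from_bytes(h, "big") >> (256 - difficulty_bits) == 0:
            return pop, nonce
        nonce += 1

pop, nonce = mpow(8)
```

Verification stays cheap, as in plain Hashcash: recompute one hash over the claimed GA state and nonce, and check the leading zero bits.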

  • A Study of The Risk Quantification Method of Cyber-Physical Systems focusing on Direct-Access Attacks to In-Vehicle Networks

    Yasuyuki KAWANISHI  Hideaki NISHIHARA  Hideki YAMAMOTO  Hirotaka YOSHIDA  Hiroyuki INOUE  

     
    PAPER

      Publicized:
    2022/11/09
      Page(s):
    341-349

    Cyber-physical systems, in which ICT systems and field devices are interconnected and interlocked, have become widespread. More threats need to be taken into consideration when designing the security of cyber-physical systems. Attackers may cause damage to the physical world through attacks that exploit vulnerabilities of ICT systems, while other attackers may use weaknesses of physical boundaries to exploit ICT systems. It is therefore necessary to assess the risks of such attacks properly. A direct-access attack in the automotive field is the latter type of attack, in which an attacker connects unauthorized equipment directly to an in-vehicle network and attempts unauthorized access. However, conventional risk assessment methods have considered it less realistic and evaluated it as less risky than other threats via network entry points. We focused on reassessing threats via direct-access attacks when proposing effective security design procedures for cyber-physical systems based on JASO TP15002, a guideline for automobiles. In this paper, we focus on "fitting to a specific area or viewpoint" of such a cyber-physical system and devise a new risk quantification method, RSS-CWSS_CPS, based on CWSS, a vulnerability evaluation standard for ICT systems. It can quantify the characteristics of physical boundaries in cyber-physical systems.

  • On the Limitations of Computational Fuzzy Extractors

    Kenji YASUNAGA  Kosuke YUZAWA  

     
    LETTER

      Publicized:
    2022/08/10
      Page(s):
    350-354

    We present a negative result for fuzzy extractors with computational security. Specifically, we show that, under a computational condition, a computational fuzzy extractor implies the existence of an information-theoretic fuzzy extractor with slightly weaker parameters. Our result implies that, to circumvent the limitations of information-theoretic fuzzy extractors, we need to employ computational fuzzy extractors that are not invertible by non-lossy functions.

  • Packer Identification Method for Multi-Layer Executables Using Entropy Analysis with k-Nearest Neighbor Algorithm

    Ryoto OMACHI  Yasuyuki MURAKAMI  

     
    LETTER

      Publicized:
    2022/08/16
      Page(s):
    355-357

    The cost of damage caused by malware has been increasing worldwide. Usually, malware is packed so that it is not detected. Identifying packers is a hard task even for professional malware analysts, especially when the malware is multi-layer packed. In this letter, we propose a method to identify the packers of multi-layer packed malware by using the k-nearest neighbor algorithm with entropy analysis.
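The core pipeline, entropy features fed to a k-nearest neighbor classifier, fits in a few lines. The feature layout and training profiles below are hypothetical and for illustration only:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy of the byte distribution, in bits per byte (0..8).
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def knn_classify(sample, training, k=3):
    # training: list of (feature_vector, label); Euclidean distance,
    # majority vote among the k nearest neighbors.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda t: dist(sample, t[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical per-layer entropy profiles: packed layers sit near 8 bits/byte.
training = [
    ([7.9, 7.8], "UPX"), ([7.8, 7.9], "UPX"), ([7.7, 7.9], "UPX"),
    ([5.1, 4.8], "unpacked"), ([4.9, 5.2], "unpacked"), ([5.0, 5.0], "unpacked"),
]
assert knn_classify([7.8, 7.8], training) == "UPX"
```

Compressed or packed sections approach the 8 bits/byte maximum, which is why per-layer entropy vectors are a workable feature space for separating packers.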

  • Special Section on Information Theory and Its Applications
  • FOREWORD Open Access

    Hiroki KOGA  

     
    FOREWORD

      Page(s):
    358-358
  • Proximal Decoding for LDPC Codes

    Tadashi WADAYAMA  Satoshi TAKABE  

     
    PAPER-Coding Theory and Techniques

      Publicized:
    2022/09/01
      Page(s):
    359-367

    This paper presents a novel optimization-based decoding algorithm for LDPC codes. The proposed decoding algorithm is based on a proximal gradient method for solving an approximate maximum a posteriori (MAP) decoding problem. The key idea of the proposed algorithm is the use of a code-constraint polynomial that penalizes a vector far from a codeword as a regularizer in the approximate MAP objective function. A code proximal operator is naturally derived from the code-constraint polynomial. The proposed algorithm, called proximal decoding, can be described by a simple recursive formula consisting of a gradient descent step for the negative log-likelihood function corresponding to the channel conditional probability density function and the code proximal operation regarding the code-constraint polynomial. Proximal decoding is experimentally shown to be applicable to several non-trivial channel models, such as LDPC-coded massive MIMO channels, correlated Gaussian noise channels, and nonlinear vector channels. In particular, in MIMO channels, proximal decoding outperforms known massive MIMO detection algorithms, such as an MMSE detector with belief propagation decoding. The simple optimization-based formulation of proximal decoding opens a way to develop novel signal processing algorithms involving LDPC codes.

  • Multi Deletion/Substitution/Erasure Error-Correcting Codes for Information in Array Design

    Manabu HAGIWARA  

     
    PAPER-Coding Theory and Techniques

      Publicized:
    2022/09/21
      Page(s):
    368-374

    This paper considers error correction for information in array designs, i.e., two-dimensional designs such as QR codes. The error model is multiple deletion/substitution/erasure errors. A code construction for these errors and an application of the code are provided. The decoding technique uses an error locator for deletion codes.
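For intuition about deletion correction, the classical single-deletion case can be demonstrated with a Varshamov-Tenengolts (VT) checksum (a standard toy, not the paper's multi-error array construction):

```python
# VT code: keep binary words whose weighted sum is 0 mod n+1. One deleted
# bit can be restored by trying every reinsertion; the VT property makes
# the recovered codeword unique.
def vt_ok(word):
    n = len(word)
    return sum((i + 1) * b for i, b in enumerate(word)) % (n + 1) == 0

def correct_deletion(received, n):
    for pos in range(n):
        for bit in (0, 1):
            cand = received[:pos] + [bit] + received[pos:]
            if vt_ok(cand):
                return cand

codeword = [0, 1, 0, 0, 1, 0]          # weighted sum 2 + 5 = 7 ≡ 0 (mod 7)
assert vt_ok(codeword)
assert correct_deletion(codeword[:3] + codeword[4:], 6) == codeword
```

Handling multiple deletions mixed with substitutions and erasures, as the paper does, requires far more structure than this single-error checksum, which is where the error-locator technique comes in.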

  • Construction of High-Rate Convolutional Codes Using Dual Codes

    Sen MORIYA  Hiroshi SASANO  

     
    PAPER-Coding Theory and Techniques

      Publicized:
    2022/08/23
      Page(s):
    375-381

    In this study, we consider techniques for finding high-rate convolutional code (CC) encoders using dual code encoders. A low-rate (R = 1/n) CC is the dual code of a high-rate (R = (n - 1)/n) CC. According to our past studies, if a CC encoder has high performance, a dual code encoder of that CC also tends to perform well. However, it is not guaranteed to have the highest performance. We consider a method to obtain a high-performance high-rate CC encoder using good dual code encoders, namely, high-performance low-rate CC encoders. We also present some CC encoders obtained by searches using our method.

  • Biometric Identification Systems with Both Chosen and Generated Secret Keys by Allowing Correlation

    Vamoua YACHONGKA  Hideki YAGI  

     
    PAPER-Shannon Theory

      Publicized:
    2022/09/06
      Page(s):
    382-393

    We propose a biometric identification system where chosen- and generated-secret keys are used simultaneously, and investigate its fundamental limits from an information-theoretic perspective. The system consists of two phases: an enrollment phase and an identification phase. In the enrollment phase, for each user, the encoder uses an independently chosen secret key and the biometric identifier to generate another secret key and helper data. In the identification phase, observing the biometric sequence of the identified user, the decoder estimates the index and the chosen- and generated-secret keys of that user based on the helper data stored in the system database. In this study, the capacity region of such a system is characterized. In our problem setting, we allow the chosen- and generated-secret keys to be correlated. As a result, by permitting this correlation, the sum of the identification, chosen-, and generated-secret key rates can achieve a larger value than in the case where the keys are not correlated. Moreover, the minimum storage rate changes in accordance with both the identification and chosen-secret key rates, whereas the minimum privacy-leakage rate depends only on the identification rate.

  • Multi-Designated Receiver Authentication Codes: Models and Constructions

    Yohei WATANABE  Takenobu SEITO  Junji SHIKATA  

     
    PAPER-Cryptography and Information Security

      Publicized:
    2022/09/30
      Page(s):
    394-405

    An authentication code (A-code) is a two-party message authentication code in the information-theoretic security setting. One variant of A-codes is the multi-receiver authentication code (MRA-code), where there are a single sender and multiple receivers, and the sender can create a single authenticator such that all receivers accept it unless it is maliciously modified. In this paper, we introduce a multi-designated receiver authentication code (MDRA-code) with information-theoretic security as an extension of MRA-codes. The purpose of MDRA-codes is to securely transmit a message via a broadcast channel from a single sender to an arbitrary subset of multiple receivers designated by the sender, so that only the receivers in the subset (i.e., not all receivers) accept the message if an adversary is absent. This paper proposes a model and security formalization of MDRA-codes, and provides constructions of MDRA-codes.

  • Information Leakage Through Passive Timing Attacks on RSA Decryption System

    Tomonori HIRATA  Yuichi KAJI  

     
    PAPER-Cryptography and Information Security

      Publicized:
    2022/08/16
      Page(s):
    406-413

    A side channel attack is a class of security attacks that tries to recover secret information by analyzing side information, such as electromagnetic waves, heat, power consumption, and running time, that is unintentionally emitted from a computer system. A side channel attack that focuses on the running time of a cryptosystem is specifically called a “timing attack”. Timing attacks are relatively easy to carry out, and are particularly threatening for tiny systems used in smart cards and IoT devices, because such a system is so simple that its processing time can be clearly observed from outside the card/device. The threat of timing attacks is especially serious when an attacker actively controls the input to a target program. Countermeasures have been studied to deter such active attacks, but an attacker still has a chance to learn something about the concealed information by passively watching the running time of the target program. The risk of passive timing attacks can be measured by the mutual information between the concealed information and the running time. However, computing this mutual information is hardly possible except for toy examples. This study focuses on three algorithms for RSA decryption, derives formulas for the mutual information under several assumptions and approximations, and calculates the mutual information numerically for practical security parameters.
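
The role of mutual information here can be made concrete with a toy model (an assumption for illustration, not one of the paper's three RSA algorithms): take a uniform 4-bit exponent and a square-and-multiply loop whose running time grows by one multiplication per 1-bit of the exponent. Since the running time is then a deterministic function of the key, I(K; T) reduces to H(T).

```python
from math import log2
from collections import Counter

# Toy leakage model: running time = squarings + one conditional multiply
# per 1-bit of the secret exponent.
def running_time(k, bits=4):
    return (bits - 1) + bin(k).count("1")

def mutual_information(bits=4):
    n = 2 ** bits
    times = Counter(running_time(k, bits) for k in range(n))
    # T is a deterministic function of the uniform key K, so H(T | K) = 0
    # and I(K; T) = H(T).
    return -sum((c / n) * log2(c / n) for c in times.values())
```

For 4-bit keys the leaked amount is about 2.03 bits out of 4, already showing how much a passive observer can learn from timing alone.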

  • Deep Learning of Damped AMP Decoding Networks for Sparse Superposition Codes via Annealing

    Toshihiro YOSHIDA  Keigo TAKEUCHI  

     
    PAPER-Communication Theory and Signals

      Publicized:
    2022/07/22
      Page(s):
    414-421

    This paper addresses short-length sparse superposition codes (SSCs) over the additive white Gaussian noise channel. Damped approximate message-passing (AMP) is used to decode short SSCs with zero-mean independent and identically distributed Gaussian dictionaries. To design damping factors in AMP via deep learning, this paper constructs deep-unfolded damped AMP decoding networks. An annealing method for deep learning is proposed for designing nearly optimal damping factors with high probability. In annealing, damping factors are first optimized via deep learning in the low signal-to-noise ratio (SNR) regime. Then, the obtained damping factors are set to the initial values in stochastic gradient descent, which optimizes damping factors for slightly larger SNR. Repeating this annealing process designs damping factors in the high SNR regime. Numerical simulations show that annealing mitigates fluctuation in learned damping factors and outperforms exhaustive search based on an iteration-independent damping factor.
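
As background on why damping matters, the toy fixed-point iteration below diverges undamped but converges once a damping factor d is applied; the map F is an illustrative stand-in for an AMP recursion, not the decoder in the paper.

```python
# Damped update x <- (1 - d) x + d F(x). For F(x) = -1.5 x + 3 the fixed
# point is x* = 1.2; the undamped iteration (d = 1) has slope -1.5 and
# diverges, while 0 < d < 0.8 gives |1 - 2.5 d| < 1 and convergence.
def iterate(d, x0=0.0, steps=100):
    F = lambda x: -1.5 * x + 3.0
    x = x0
    for _ in range(steps):
        x = (1.0 - d) * x + d * F(x)
    return x
```

The annealing procedure in the abstract then concerns how to *choose* such damping factors: values learned at low SNR serve as initial values for training at slightly higher SNR.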

  • Asymptotic Evaluation of Classification in the Presence of Label Noise

    Goki YASUDA  Tota SUKO  Manabu KOBAYASHI  Toshiyasu MATSUSHIMA  

     
    PAPER-Learning

      Publicized:
    2022/08/26
      Page(s):
    422-430

    In practical classification problems, there are cases where incorrect labels are included in training data due to label noise. We introduce a classification method in the presence of label noise that idealizes a classification method based on the expectation-maximization (EM) algorithm, and evaluate its performance theoretically. Its performance is asymptotically evaluated by assessing the risk function defined as the Kullback-Leibler divergence between the predictive distribution and the true distribution. The result of this performance evaluation enables a theoretical evaluation of the best performance that the EM-based classification method can achieve.

  • Enumeration of Both-Ends-Fixed k-Ary Necklaces and Its Applications

    Hiroshi FUJISAKI  

     
    PAPER-Fundamentals of Information Theory

      Publicized:
    2022/08/23
      Page(s):
    431-439

    We consider both-ends-fixed k-ary necklaces and enumerate all such necklaces of length n from the viewpoints of symbolic dynamics and β-expansions, where n and k(≥ 2) are natural numbers and β(> 1) is a real number. Recently, Sawada et al. proposed an efficient construction of k-ary de Bruijn sequences of length k^n, which, for each n ≥ 1, requires O(n) space but generates a single k-ary de Bruijn sequence of length k^n in O(1)-amortized time per bit. Based on the enumeration of both-ends-fixed k-ary necklaces of length n, we evaluate the auto-correlation values of the k-ary de Bruijn sequences of length k^n constructed by Sawada et al. We also estimate the asymptotic behaviour of the obtained auto-correlation values as n tends to infinity.

  • Combinatorial Structures Behind Binary Generalized NTU Sequences

    Xiao-Nan LU  

     
    LETTER-Cryptography and Information Security

      Publicized:
    2022/06/15
      Page(s):
    440-444

    This paper concentrates on a class of pseudorandom sequences generated by combining q-ary m-sequences and quadratic characters over a finite field of odd order, called binary generalized NTU sequences. It is shown that the relationship among the sub-sequences of binary generalized NTU sequences can be formulated as combinatorial structures called Hadamard designs. As a consequence, the combinatorial structures generalize the group structure discovered by Kodera et al. (IEICE Trans. Fundamentals, vol.E102-A, no.12, pp.1659-1667, 2019) and lead to a finite-geometric explanation for the investigated group structure.
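
For background, the quadratic character over GF(p) that enters this construction can be computed with Euler's criterion, χ(a) = a^((p−1)/2) mod p. The snippet below generates a plain Legendre-symbol sequence; the full generalized NTU construction, which combines this with q-ary m-sequences, is not reproduced.

```python
# Quadratic character of GF(p) via Euler's criterion:
# chi(a) = +1 if a is a nonzero square mod p, -1 if a nonsquare, 0 if p | a.
def quadratic_character(a, p):
    if a % p == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_sequence(p):
    """Legendre-symbol sequence over the nonzero residues 1, ..., p - 1."""
    return [quadratic_character(a, p) for a in range(1, p)]
```

For p = 7 the quadratic residues are {1, 2, 4}, so the sequence is [1, 1, -1, 1, -1, -1].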

  • Orthogonal Variable Spreading Factor Codes Suppressing Signal-Envelope Fluctuation

    Tomoko K. MATSUSHIMA  Shoichiro YAMASAKI  Hirokazu TANAKA  

     
    LETTER-Spread Spectrum Technologies and Applications

      Publicized:
    2022/08/08
      Page(s):
    445-449

    Recently, complex orthogonal variable spreading factor (OVSF) codes based on polyphase orthogonal codes have been proposed to support multi-user/multi-rate data transmission services in synchronous direct-sequence code-division multiple access (DS-CDMA) systems. This study investigates the low signal-envelope fluctuation property of the complex OVSF codes in terms of transmission signal trajectories. In addition, a new method is proposed to suppress the envelope fluctuation more strongly at the expense of reducing the number of spreading sequences of the codes.
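
The complex OVSF codes discussed above extend the classical binary OVSF code tree, in which each code c spawns the two orthogonal children (c, c) and (c, −c). A minimal sketch of that recursive construction (binary case only; the polyphase extension in the letter is not reproduced):

```python
# Classical binary OVSF code-tree construction: start from [1] and, at each
# level, replace every code c by its two children c|c and c|(-c). All codes
# at the same level are mutually orthogonal.
def ovsf_codes(sf):
    """Return all OVSF spreading codes of spreading factor sf (a power of 2)."""
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [child for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return codes
```

Codes from different levels of the tree can serve users at different data rates, as long as no assigned code is an ancestor of another — the property that makes OVSF codes suitable for multi-rate DS-CDMA.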

  • New Construction of Z-Optimal Type-II Even-Length Quadriphase Z-Complementary Pairs

    Fanxin ZENG  Xiping HE  Zhenyu ZHANG  Li YAN  

     
    LETTER-Sequences

      Publicized:
    2022/08/23
      Page(s):
    450-453

    Type-II Z-complementary pairs (ZCPs) play an important role in suppressing asynchronous interference in wideband wireless communication systems where the minimum interfering-signal delay is large. Based on binary Golay complementary pairs (BGCPs) and an interleaving technique, a new construction for producing Z-optimal Type-II even-length quadriphase ZCPs (EL-QZCPs) is presented, and the resultant pairs have new lengths of the form 2 × 2^α 10^β 26^γ (α, β, γ non-negative integers), which are not covered by existing known Type-II EL-QZCPs.
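
For background on the building blocks: a Golay complementary pair has aperiodic autocorrelations that sum to zero at every nonzero shift, and a ZCP relaxes this to shifts within a zone. The checker below verifies two classical binary pairs; it illustrates the defining property only, not the interleaving construction of the letter.

```python
# Aperiodic autocorrelation at shift u, and the Golay complementary-pair
# property: acf(a, u) + acf(b, u) == 0 for every nonzero shift u.
def acf(seq, u):
    return sum(seq[i] * seq[i + u] for i in range(len(seq) - u))

def is_complementary_pair(a, b):
    n = len(a)
    return all(acf(a, u) + acf(b, u) == 0 for u in range(1, n))
```

The classical binary pairs ([1, 1], [1, -1]) and ([1, 1, 1, -1], [1, 1, -1, 1]) both pass, whereas a repeated sequence paired with itself does not.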

  • Special Section on Smart Multimedia & Communication Systems
  • FOREWORD Open Access

    Shingo YOSHIZAWA  Hiroshi TSUTSUI  

     
    FOREWORD

      Page(s):
    454-455
  • Joint Transmission Null Beamforming for MIMO Full-Duplex Wireless Communication System

    Kotaro NAGANO  Masahiro KAWANO  Yuhei NAGAO  Hiroshi OCHI  

     
    PAPER

      Publicized:
    2022/09/15
      Page(s):
    456-463

    Cancellation of self-interference (SI) is an important technology for enabling wireless communication devices to perform full-duplex communication. In this paper, we propose a novel self-interference cancellation method using null beamforming that can be applied to the entire IEEE 802.11 frame, including the legacy part, for full-duplex wireless communication on cooperative MIMO (multiple-input multiple-output). We evaluate the amount of SI cancellation achieved by the proposed method using a field-programmable gate array (FPGA) and software-defined radio (SDR), and show the experimental results. In the experiment, it is confirmed that the amount of SI cancellation by the proposed method is at least 18 dB. The SI cancellation amount can be further improved with more accurate CSI (channel state information) obtained by increasing the transmission power. It is shown that SI can be suppressed over the whole frame, including the legacy preamble part. The proposed method can be applied to next-generation wireless communication standards as well.

  • Accurate Phase Angle Measurement of Backscatter Signal under Noisy Environment

    Tomoya IWASAKI  Osamu TOKUMASU  Jin MITSUGI  

     
    PAPER

      Publicized:
    2022/09/15
      Page(s):
    464-470

    Backscatter communication is an emerging wireless access technology for realizing ultra-low-power terminals that exploit the modulated reflection of incident radio waves. This paper proposes a method to measure the phase angle of a backscatter link using principal component analysis (PCA). Measuring the phase angle of the backscatter link at the receiver is essential to maximize the signal quality for subsequent demodulation and to measure the distance and the angle of arrival. The drawback of popular phase angle measurement based on naive phase averaging or linear regression analysis is that it produces erroneous phase angles when the phase angle is close to $\pm\frac{\pi}{2}$ radians and the signal quality is poor. The advantage of the proposal is quantified with a computer simulation, a conducted experiment, and radio propagation experiments.
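
The PCA idea can be sketched as follows: treat the received I/Q samples as a 2-D point cloud and take the direction of maximal variance as the link phase. For two dimensions the principal-axis angle has the closed form 0.5·atan2(2·S_xy, S_xx − S_yy). This is the generic PCA estimator under the assumptions of this sketch, not the paper's full receiver processing.

```python
from math import atan2, pi

# Principal-axis angle of a 2-D point cloud (I on x, Q on y): the angle of
# the leading eigenvector of the 2x2 scatter matrix, in closed form.
def principal_angle(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    return 0.5 * atan2(2 * sxy, sxx - syy)
```

Unlike naive phase averaging, this estimator has no special difficulty near ±π/2: points scattered along a 45° line give pi/4, and points along the I axis give 0.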

  • Brightness Preserving Generalized Histogram Equalization with High Contrast Enhancement Ability

    Hideaki TANAKA  Akira TAGUCHI  

     
    PAPER

      Publicized:
    2022/10/11
      Page(s):
    471-480

    Histogram equalization (HE) is one of the simplest and most effective methods for contrast enhancement. It can automatically define the gray-level mapping function based on the distribution of gray levels in the image. However, since HE does not use spatial features of the input image, it fails to produce satisfactory results for a broad range of low-contrast images. The differential gray-level histogram (DH), which contains edge information of the input image, was therefore defined, and differential gray-level histogram equalization (DHE) has been proposed. DHE shows better enhancement results than HE for many kinds of images. In this paper, we propose a generalized histogram equalization (GHE) that includes HE and DHE as special cases. In GHE, the histogram is created using a power of the differential gray-level, which incorporates the spatial features of the image. In HE, the mean brightness of the enhanced image cannot be controlled. By contrast, GHE can control the mean brightness of the enhanced image by changing the power; thus, the mean brightness of the input image can be perfectly preserved while maintaining good contrast enhancement.
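
For reference, plain HE maps each gray level through the scaled cumulative histogram. The sketch below implements only this baseline; GHE as described above instead builds the histogram from powers of the differential (edge) gray-level, which is not reproduced here.

```python
# Plain histogram equalization: the gray-level mapping is the cumulative
# histogram scaled to the full output range.
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    n = len(pixels)
    mapping = [round((c / n) * (levels - 1)) for c in cdf]
    return [mapping[p] for p in pixels]
```

On a flat 4-level histogram the mapping simply stretches the levels toward the top of the range, which is the automatic gray-level redistribution the abstract refers to.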

  • New Performance Evaluation Method for Data Embedding Techniques for Printed Images Using Mobile Devices Based on a GAN

    Masahiro YASUDA  Soh YOSHIDA  Mitsuji MUNEYASU  

     
    LETTER

      Publicized:
    2022/08/23
      Page(s):
    481-485

    Methods that embed data into printed images and retrieve data from printed images captured using the camera of a mobile device have been proposed. Evaluating these methods requires printing and capturing actual embedded images, which is burdensome. In this paper, we propose a method for reducing the workload for evaluating the performance of data embedding algorithms by simulating the degradation caused by printing and capturing images using generative adversarial networks. The proposed method can represent various captured conditions. Experimental results demonstrate that the proposed method achieves the same accuracy as detecting embedded data under actual conditions.

  • Analytical Minimization of L2-Sensitivity for All-Pass Fractional Delay Digital Filters with Normalized Lattice Structure

    Shunsuke KOSHITA  

     
    LETTER

      Publicized:
    2022/08/24
      Page(s):
    486-489

    This letter theoretically analyzes and minimizes the L2-sensitivity for all-pass fractional delay digital filters whose structure is given by the normalized lattice structure. The L2-sensitivity is well known as one of the useful evaluation functions for measuring the performance degradation caused by quantizing filter coefficients into a finite number of bits. This letter deals with two cases: the L2-sensitivity minimization problem with a scaling constraint, and the one without a scaling constraint. It is proved that, in both cases, any all-pass fractional delay digital filter with the normalized lattice structure is an optimal structure that analytically minimizes the L2-sensitivity.

  • Special Section on VLSI Design and CAD Algorithms
  • FOREWORD Open Access

    Toshihiro HATTORI  

     
    FOREWORD

      Page(s):
    490-490
  • Heterogeneous Integration of Precise and Approximate Storage for Error-Tolerant Workloads

    Chihiro MATSUI  Ken TAKEUCHI  

     
    PAPER

      Publicized:
    2022/09/05
      Page(s):
    491-503

    This study proposes a heterogeneous integration of precise and approximate storage in data center storage. The storage control engine allocates precise and error-tolerant applications to precise and approximate storage, respectively. The appropriate use of both precise and approximate storage is examined by applying a non-volatile memory capacity algorithm. To respond to changes in applications over time, the non-volatile memory capacity algorithm changes the capacities of the storage class memories (SCMs), namely the memory-type SCM (M-SCM) and the storage-type SCM (S-SCM), in the non-volatile memory resource. A three-dimensional triple-level cell (TLC) NAND flash is used as a large-capacity memory. The results indicate that precise storage exhibits high performance when the maximum storage cost is high. By contrast, with a low maximum storage cost, approximate storage exhibits high performance using a low-bit-cost approximate multi-level cell (MLC) S-SCM.

  • Real-Time Image-Based Vibration Extraction with Memory-Efficient Optical Flow and Block-Based Adaptive Filter

    Taito MANABE  Yuichiro SHIBATA  

     
    PAPER

      Publicized:
    2022/09/05
      Page(s):
    504-513

    In this paper, we propose a real-time vibration extraction system, which extracts the vibration component within a given frequency range from videos in real time, for realizing the tremor suppression used in microsurgery assistance systems. To overcome the problems of our previous system based on the mean Lucas-Kanade (LK) optical flow of the whole frame, we have introduced a new architecture combining dense optical flow calculated with simple feature matching and block-based band-pass filtering using a band-limited multiple Fourier linear combiner (BMFLC). As a feature for optical flow calculation, we use a simplified rotation-invariant histogram of oriented gradients (RIHOG) based on a gradient angle quantized to 1, 2, or 3 bits, which greatly reduces the memory resources used for a frame buffer. The obtained optical flow map is then divided into multiple blocks, and BMFLC is applied to the mean optical flow of each block independently. By using the L1-norm of the adaptive weight vectors in BMFLC as a criterion, blocks belonging to vibrating objects can be isolated from the background at low cost, leading to better extraction accuracy compared to the previous system. The whole system for 480p and 720p resolutions can be implemented on a single Xilinx Zynq-7000 XC7Z020 FPGA without any external memory, and can process a video stream supplied directly from a camera at 60 fps.

  • Dynamic Verification Framework of Approximate Computing Circuits using Quality-Aware Coverage-Based Grey-Box Fuzzing

    Yutaka MASUDA  Yusei HONDA  Tohru ISHIHARA  

     
    PAPER

      Publicized:
    2022/09/02
      Page(s):
    514-522

    Approximate computing (AC) has recently emerged as a promising approach to the energy-efficient design of digital systems. To realize practical AC designs, we need to verify whether the designed circuit operates correctly under various operating conditions; namely, the verification needs to efficiently find fatal logic errors or timing errors that violate the constraint on computational quality. This work focuses on verification settings where the computational results can be observed, the computational quality can be calculated from those results, and a quality constraint is given for the designed AC circuit under given workloads. This paper then proposes a novel dynamic verification framework for AC circuits. The key idea of the proposed framework is to incorporate a quality assessment capability into coverage-based grey-box fuzzing (CGF). CGF is one of the most promising techniques in the research field of software security testing. By repeating (1) mutation of test patterns, (2) execution of the program under test (PUT), and (3) aggregation of coverage information and feedback to the next test-pattern generation, CGF can explore the verification space quickly and automatically. On the other hand, CGF by itself cannot consider the computational quality. To overcome this quality unawareness in CGF, the proposed framework additionally embeds the design under verification (DUV) component into the calculation of the computational quality. Thanks to the DUV integration, the proposed framework realizes a quality-aware feedback loop in CGF and thus quickly enhances the verification coverage for test patterns that violate the quality constraint. In this work, we quantitatively compared the verification coverage of approximate arithmetic circuits between the proposed framework and a random test. In a case study of an approximate multiply-accumulate (MAC) unit, we experimentally confirmed that the proposed framework achieved 3.85 to 10.36 times higher coverage than the random test.
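
A minimal coverage-guided fuzzing loop with a quality check bolted on, in the spirit of the framework described above: mutate test patterns, keep inputs that reach new coverage, and record inputs whose computational quality (here, the absolute error of a toy approximate adder that drops each operand's LSB) violates a bound. The PUT, coverage proxy, and quality metric are illustrative stand-ins, not the paper's DUV or tooling.

```python
import random

def approx_add(a, b):
    """Toy approximate PUT: an adder that drops each operand's LSB."""
    return ((a >> 1) + (b >> 1)) << 1

def branch_id(a, b):
    """Crude stand-in for branch coverage of the PUT."""
    return (a > b, a % 2, b % 2)

def fuzz(threshold=1, rounds=2000, seed=0):
    rng = random.Random(seed)
    corpus = [(0, 0)]
    covered, violations = set(), []
    for _ in range(rounds):
        a, b = rng.choice(corpus)
        # mutation: flip one random low bit in each operand
        a ^= 1 << rng.randrange(8)
        b ^= 1 << rng.randrange(8)
        cov = branch_id(a, b)
        if cov not in covered:          # coverage feedback keeps interesting inputs
            covered.add(cov)
            corpus.append((a, b))
        if abs((a + b) - approx_add(a, b)) > threshold:
            violations.append((a, b))   # quality-constraint violation found
    return covered, violations
```

The loop steadily grows both the covered-branch set and the list of quality violations, which is the quality-aware feedback the framework adds on top of plain CGF.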

  • Vulnerability Estimation of DNN Model Parameters with Few Fault Injections

    Yangchao ZHANG  Hiroaki ITSUJI  Takumi UEZONO  Tadanobu TOBA  Masanori HASHIMOTO  

     
    PAPER

      Pubricized:
    2022/11/09
      Page(s):
    523-531

    The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automatic driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation, resulting in unexpected outputs that can trigger catastrophic system failures. As a first step to tackle this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations for fault injection and develop a flow to incrementally collect the training data, i.e., the fault injection results, to improve VM accuracy. We enumerate key features (KFs) that characterize the vulnerability of the parameters and use the KFs and the collected training data to construct the VM. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.
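
The basic operation behind such fault-injection campaigns is a single bit flip in a stored parameter. The sketch below flips one bit of an IEEE-754 float32 value; it illustrates why vulnerability is strongly bit-position dependent (a high exponent bit is catastrophic, a low mantissa bit is benign), which is the asymmetry a vulnerability model learns to predict. This is a generic illustration, not the paper's injection flow.

```python
import struct

# Flip one bit of a float32 parameter: reinterpret the value as a 32-bit
# integer, XOR the chosen bit, and reinterpret back as a float.
def flip_bit(value, bit):
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out
```

Flipping the top exponent bit of 1.0 yields infinity, flipping the sign bit yields -1.0, while flipping the mantissa LSB changes the value by only 2^-23.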

  • An Accuracy Reconfigurable Vector Accelerator based on Approximate Logarithmic Multipliers for Energy-Efficient Computing

    Lingxiao HOU  Yutaka MASUDA  Tohru ISHIHARA  

     
    PAPER

      Pubricized:
    2022/09/02
      Page(s):
    532-541

    The approximate logarithmic multiplier proposed by Mitchell provides an efficient alternative for processing dense multiplication or multiply-accumulate operations in applications such as image processing and real-time robotics. It offers the advantages of small area and high energy efficiency, and is suitable for applications that do not necessarily require high accuracy. However, its maximum error of 11.1% makes it challenging to deploy in applications requiring relatively high accuracy. This paper proposes a novel operand decomposition (OD) method that decomposes one multiplication into the sum of multiple approximate logarithmic multiplications to substantially reduce Mitchell multiplier errors while taking full advantage of its area savings. Based on the proposed OD method, this paper also proposes an accuracy-reconfigurable multiply-accumulate (MAC) unit that provides multiple reconfigurable accuracies with high parallelism. Compared to a MAC unit consisting of accurate multipliers, the area is significantly reduced to less than half, improving the hardware parallelism while satisfying the required accuracy for various scenarios. The experimental results show the excellent applicability of our proposed MAC unit to image smoothing and robot localization and mapping applications. We have also designed a prototype processor that integrates the minimum functionality of this MAC unit as a vector accelerator and have implemented software-level accuracy reconfiguration in the form of an instruction set extension. We experimentally confirmed the correct operation of the proposed vector accelerator, which provides different degrees of accuracy and parallelism at the software level.
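
Mitchell's approximation, for positive integers: write each operand as 2^k(1+f) with f ∈ [0, 1), approximate log2 by k + f, add the two logs, and take the approximate antilog. The sketch below reproduces only this classical multiplier (maximum relative error about 11.1%, always an underestimate); the paper's operand-decomposition refinement is not included.

```python
# Mitchell's approximate logarithmic multiplication of two positive integers.
def mitchell_mul(a, b):
    ka, kb = a.bit_length() - 1, b.bit_length() - 1    # leading-one positions
    fa, fb = a / (1 << ka) - 1.0, b / (1 << kb) - 1.0  # fractional parts in [0, 1)
    k, f = ka + kb, fa + fb                            # add the approximate logs
    if f >= 1.0:                                       # carry out of the fraction
        k, f = k + 1, f - 1.0
    return (1 << k) * (1.0 + f)                        # approximate antilog
```

Products of powers of two are exact (e.g., 4 × 5 = 20), while 3 × 3 comes out as 8 instead of 9, the worst-case 11.1% underestimate that motivates the operand decomposition.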

  • Approximation-Based System Implementation for Real-Time Minimum Energy Point Tracking over a Wide Operating Performance Region

    Shoya SONODA  Jun SHIOMI  Hidetoshi ONODERA  

     
    PAPER

      Pubricized:
    2022/10/07
      Page(s):
    542-550

    This paper refers to the optimal voltage pair, which minimizes the energy consumption of LSI circuits under a target delay constraint, as a Minimum Energy Point (MEP). This paper proposes an approximation-based implementation method for an MEP tracking system over a wide voltage region. It focuses on the MEP characteristic that the energy loss is sufficiently small even when the operating voltage point deviates from the MEP. For example, the energy loss is less than 5% even when the estimated MEP differs by a few tens of millivolts from the actual MEP. Therefore, the complexity of determining the MEP is relaxed by approximating complex operations, such as the logarithmic and exponential functions, in the MEP tracking algorithm, which leads to hardware- and software-efficient implementation. When the MEP tracking algorithm is implemented in software, the MEP estimation time is reduced from 1 ms to 13 µs by the proposed approximation. When implemented in hardware, the proposed method can reduce the area of an MEP estimation circuit to a quarter. Measurement results of a 32-bit RISC-V processor fabricated in a 65-nm SOTB process technology show that the energy loss introduced by the proposed approximation is less than 2% compared with MEP operation. Furthermore, we show that the MEP can be tracked within about 45 µs by the proposed MEP tracking system.

  • Libretto: An Open Cell Timing Characterizer for Open Source VLSI Design

    Shinichi NISHIZAWA  Toru NAKURA  

     
    PAPER

      Pubricized:
    2022/09/13
      Page(s):
    551-559

    We propose an open-source cell library characterizer. Recently, free and open-source silicon design communities have attracted hobbyist designers, academia, and industry. These open-source silicon designs are supported by free and open-source EDA tools; however, to our knowledge, the tool chain lacks a cell library characterizer for using original standard cells in digital circuit design. This paper proposes an open-source cell library characterizer that can generate timing models and power models of a standard cell library.

  • An eFPGA Generation Suite with Customizable Architecture and IDE

    Morihiro KUGA  Qian ZHAO  Yuya NAKAZATO  Motoki AMAGASAKI  Masahiro IIDA  

     
    PAPER

      Pubricized:
    2022/10/07
      Page(s):
    560-574

    From edge devices to cloud servers, providing optimized hardware acceleration for specific applications has become a key approach to improving the efficiency of computer systems. Traditionally, many systems employ commercial field-programmable gate arrays (FPGAs) to implement dedicated hardware accelerators as the CPU's co-processor. However, commercial FPGAs are designed with generic architectures and are provided in the form of discrete chips, which makes it difficult to meet increasingly diversified market needs, such as balancing reconfigurable hardware resources for a specific application or integrating the fabric into a customer's system-on-a-chip (SoC) in the form of an embedded FPGA (eFPGA). In this paper, we propose an eFPGA generation suite with a customizable architecture and integrated development environment (IDE), which covers the entire eFPGA design generation, testing, and utilization stages. For eFPGA design generation, our intellectual property (IP) generation flow can explore the optimal logic cell, routing, and array structures for given target applications. For testability, we employ a previously proposed shipping test method that is 100% accurate at detecting all stuck-at faults in the entire FPGA-IP. In addition, we propose a user-friendly and customizable Web-based IDE framework for the generated eFPGA based on the NODE-RED development framework. In a case study, we show an eFPGA architecture exploration example for a differential privacy encryption application using the proposed suite. We then show the implementation and evaluation of an eFPGA prototype with a 55-nm test element group chip design.

  • Regular Section
  • A New Subsample Time Delay Estimation Algorithm for LFM-Based Detection

    Cui YANG  Yalu XU  Yue YU  Gengxin NING  Xiaowu ZHU  

     
    PAPER-Ultrasonics

      Publicized:
    2022/09/09
      Page(s):
    575-581

    This paper investigates a subsample time delay estimation (STE) algorithm based on the amplitude of the cross-correlation function to improve estimation accuracy. In this paper, a rough time delay estimate is first obtained with a traditional cross-correlator, and a fine estimate is then achieved by fitting the sampled cross-correlation sequence to the amplitude of the theoretical cross-correlation function of a linear frequency modulation (LFM) signal. Simulation results show that the proposed algorithm outperforms existing methods and can effectively improve time delay estimation accuracy with complexity comparable to the traditional cross-correlation method. The theoretical Cramér-Rao bound (CRB) is derived, and simulations demonstrate that the performance of STE can approach this bound. Finally, four important parameters are discussed in the simulations to explore their impact on the mean squared error (MSE).
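
For contrast with the proposed STE, the classical baseline it improves upon can be sketched as integer-lag cross-correlation followed by three-point parabolic interpolation around the peak; the paper instead fits the theoretical LFM cross-correlation amplitude, which is not reproduced here.

```python
# Classical subsample time-delay estimation: find the integer lag that
# maximizes the cross-correlation, then refine with a parabola fitted to
# the peak sample and its two neighbors.
def xcorr_delay(x, y):
    n = len(x)
    corr = {}
    best, best_v = 0, float("-inf")
    for lag in range(-n + 1, n):
        v = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        corr[lag] = v
        if v > best_v:
            best, best_v = lag, v
    # three-point parabolic refinement: vertex of the quadratic through
    # (best-1, ym), (best, y0), (best+1, yp)
    ym, y0, yp = corr.get(best - 1, 0.0), corr[best], corr.get(best + 1, 0.0)
    denom = ym - 2 * y0 + yp
    frac = 0.5 * (ym - yp) / denom if denom != 0 else 0.0
    return best + frac
```

For a symmetric pulse delayed by an integer number of samples, the parabolic correction is zero and the integer peak is returned exactly.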

  • A State-Space Approach and Its Estimation Bias Analysis for Adaptive Notch Digital Filters with Constrained Poles and Zeros

    Yoichi HINAMOTO  Shotaro NISHIMURA  

     
    PAPER-Digital Signal Processing

      Publicized:
    2022/09/16
      Page(s):
    582-589

    This paper deals with a state-space approach for adaptive second-order IIR notch digital filters with constrained poles and zeros. A simplified iterative algorithm is derived from the gradient-descent method to minimize the mean-squared output of an adaptive notch digital filter. Then, stability and parameter-estimation bias are analyzed for the simplified iterative algorithm. A numerical example is presented to demonstrate the validity and effectiveness of the proposed adaptive state-space notch digital filter and parameter-estimation bias analysis.

  • A CFAR Detection Algorithm Based on Clutter Knowledge for Cognitive Radar

    Kaixuan LIU  Yue LI  Peng WANG  Xiaoyan PENG  Hongshu LIAO  Wanchun LI  

     
    PAPER-Digital Signal Processing

      Publicized:
    2022/09/13
      Page(s):
    590-599

    Against a background of non-homogeneous and dynamically time-varying clutter, the processing ability of the traditional constant false alarm rate (CFAR) detection algorithm is significantly reduced, and so is its detection performance. This paper proposes a CFAR detection algorithm based on clutter knowledge (CK-CFAR), as a new CFAR, to improve the adaptability of radar detection performance in complex clutter backgrounds. With the acquired prior knowledge of the clutter, the algorithm can dynamically select parameters according to changes in the background clutter and calculate the threshold. Compared with detection algorithms such as CA-CFAR, GO-CFAR, SO-CFAR, and OS-CFAR, the simulation results show that CK-CFAR has excellent detection performance against homogeneous clutter and clutter edges. This algorithm can help radar adapt to clutter with different distribution characteristics and effectively enhance radar detection in complex environments, which is in line with the development direction of cognitive radar.
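
For reference, the classical cell-averaging CFAR (CA-CFAR) that CK-CFAR is compared against estimates the noise level for each cell under test by averaging leading and lagging reference cells (skipping guard cells) and declares a detection when the cell exceeds a scaled estimate. A minimal sketch with assumed window sizes and scale factor:

```python
# Cell-averaging CFAR over a 1-D power profile. For each cell under test,
# the noise level is the mean of num_ref reference cells on each side,
# separated from the test cell by num_guard guard cells; a detection is
# declared when the cell exceeds alpha times that estimate.
def ca_cfar(power, num_ref=4, num_guard=1, alpha=3.0):
    n = len(power)
    detections = []
    for i in range(n):
        ref = []
        for j in range(i - num_guard - num_ref, i - num_guard):
            if 0 <= j < n:
                ref.append(power[j])      # leading reference window
        for j in range(i + num_guard + 1, i + num_guard + 1 + num_ref):
            if 0 <= j < n:
                ref.append(power[j])      # lagging reference window
        if ref and power[i] > alpha * sum(ref) / len(ref):
            detections.append(i)
    return detections
```

With a strong target embedded in unit-power noise, only the target cell crosses the adaptive threshold; the fixed alpha is exactly what CK-CFAR replaces with clutter-knowledge-driven parameter selection.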

  • On the Number of Affine Equivalence Classes of Vectorial Boolean Functions and q-Ary Functions

    Shihao LU  Haibin KAN  Jie PENG  Chenmiao SHI  

     
    PAPER-Cryptography and Information Security

      Publicized:
    2022/08/24
      Page(s):
    600-605

    Vectorial Boolean functions play an important role in cryptography, sequences, and coding theory. Both affine equivalence and EA-equivalence are well-known equivalence relations between vectorial Boolean functions. In this paper, we give an exact formula for the number of affine equivalence classes and an asymptotic formula for the number of EA-equivalence classes of vectorial Boolean functions.
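The paper's formulas are not reproduced in the abstract, but the objects being counted can be illustrated at toy scale. The sketch below counts affine equivalence classes of vectorial Boolean functions F: F_2^2 → F_2^2 (taking F ~ A1∘F∘A2 for affine permutations A1, A2) by explicit orbit enumeration, cross-checked with Burnside's lemma; the function names are ours, not the paper's:

```python
from itertools import product

def affine_perms(n=2):
    """All affine permutations x -> Mx + t of F_2^n, represented as
    permutations of {0..2^n-1} (bit i of an integer = coordinate i)."""
    pts = list(range(1 << n))
    def apply(M, t, x):
        y = t
        for i in range(n):
            if (x >> i) & 1:
                y ^= M[i]          # M[i] is column i as a bitmask
        return y
    perms = []
    for M in product(pts, repeat=n):
        if len({apply(M, 0, x) for x in pts}) == len(pts):  # invertible
            for t in pts:
                perms.append(tuple(apply(M, t, x) for x in pts))
    return perms

def count_classes(n=2):
    """Count orbits of F: F_2^n -> F_2^n under F -> A1 o F o A2,
    both by direct enumeration and via Burnside's lemma."""
    G = affine_perms(n)
    q = 1 << n
    funcs = list(product(range(q), repeat=q))
    index = {F: i for i, F in enumerate(funcs)}
    seen = [False] * len(funcs)
    orbits = 0
    for i, F in enumerate(funcs):
        if seen[i]:
            continue
        orbits += 1
        stack, seen[i] = [F], True
        while stack:
            cur = stack.pop()
            for A1 in G:
                for A2 in G:
                    img = tuple(A1[cur[A2[x]]] for x in range(q))
                    j = index[img]
                    if not seen[j]:
                        seen[j] = True
                        stack.append(img)
    # Burnside: number of orbits = average number of fixed functions
    fixed = sum(1 for A1 in G for A2 in G for F in funcs
                if all(A1[F[A2[x]]] == F[x] for x in range(q)))
    return orbits, fixed // (len(G) ** 2)
```

The two counts must agree; the paper's contribution is a closed-form and asymptotic treatment of exactly this kind of count for general dimensions, where enumeration is hopeless.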

  • Lookahead Search-Based Low-Complexity Multi-Type Tree Pruning Method for Versatile Video Coding (VVC) Intra Coding

    Qi TENG  Guowei TENG  Xiang LI  Ran MA  Ping AN  Zhenglong YANG  

     
    PAPER-Coding Theory

      Publicized:
    2022/08/24
      Page(s):
    606-615

    The latest versatile video coding (VVC) introduces some novel techniques such as quadtree with nested multi-type tree (QTMT), multiple transform selection (MTS) and multiple reference line (MRL). These tools improve compression efficiency compared with the previous standard H.265/HEVC, but they suffer from very high computational complexity. One of the most time-consuming parts of VVC intra coding is the coding tree unit (CTU) structure decision. In this paper, we propose a low-complexity multi-type tree (MT) pruning method for VVC intra coding. This method consists of lookahead search and MT pruning. The lookahead search process is performed to derive the approximate rate-distortion (RD) cost of each MT node at depth 2 or 3. Subsequently, the improbable MT nodes are pruned by different strategies under different cost errors. These strategies are designed according to the priority of the node. Experimental results show that the overall proposed algorithm can achieve 47.15% time saving with only 0.93% Bjøntegaard delta bit rate (BDBR) increase over natural scene sequences, and 45.39% time saving with 1.55% BDBR increase over screen content sequences, compared with the VVC reference software VTM 10.0. Such results demonstrate that our method achieves a good trade-off between computational complexity and compression quality compared to recent methods.
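As a purely illustrative sketch of the pruning stage described above — keep only split candidates whose lookahead (approximate) RD cost is within a priority-dependent slack of the best cost, and prune the rest from full evaluation — consider the following; the mode names, priorities, and tolerances are hypothetical, not VTM's:

```python
def prune_mt_nodes(approx_cost, best_cost, slack_by_priority):
    """Priority-aware pruning of multi-type-tree split candidates.

    approx_cost: dict mapping (mode, priority) -> lookahead RD cost.
    slack_by_priority: dict mapping priority -> relative tolerance;
        higher-priority modes get a looser tolerance so they are
        pruned more conservatively.
    Returns the modes kept for full rate-distortion evaluation."""
    survivors = []
    for (mode, priority), cost in approx_cost.items():
        if cost <= best_cost * (1.0 + slack_by_priority[priority]):
            survivors.append(mode)
    return survivors
```

The point of the lookahead search is precisely to make `approx_cost` cheap to obtain, so that most MT nodes never reach the expensive full RD check.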

  • Concatenated Permutation Codes under Chebyshev Distance

    Motohiro KAWASUMI  Kenta KASAI  

     
    PAPER-Coding Theory

      Publicized:
    2022/09/21
      Page(s):
    616-632

    Permutation codes are error-correcting codes over symmetric groups. We focus on permutation codes under the Chebyshev (ℓ∞) distance. A permutation code invented by Kløve et al. has length n, size 2^{n-d}, and minimum distance d. We denote this code by φn,d. To the best of the authors' knowledge, it is the largest known code of length n and minimum Chebyshev distance d > n/2. Kløve et al. also devised efficient encoding and hard-decision decoding (HDD) algorithms that outperform bounded-distance decoding. In this paper, we derive a tight upper bound on the decoding error probability of HDD. Through a factor-graph formalization, we derive an efficient maximum a posteriori probability decoding algorithm for φn,d. We then explore concatenating the permutation codes φn,d with binary outer codes for more robust error correction. A naturally induced pseudo-distance over the binary outer codes successfully characterizes the Chebyshev distance of the concatenated permutation codes. Using this distance, we upper-bound the minimum Chebyshev distance of the concatenated codes and show how to concatenate binary linear codes so as to achieve the upper bound. We also derive the distance distribution of concatenated permutation codes with random outer codes. Finally, we demonstrate that the sum-product decoding performance of concatenated codes with outer low-density parity-check codes outperforms conventional schemes.
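As a small illustration of the metric involved (not of Kløve's construction or the paper's decoders), the following sketch computes the Chebyshev distance between permutations and exhaustively finds a maximum code for a tiny parameter set:

```python
from itertools import permutations, combinations

def chebyshev(p, q):
    """Chebyshev (l-infinity) distance between two permutations,
    given as tuples over {0..n-1}."""
    return max(abs(a - b) for a, b in zip(p, q))

def largest_code(n, d):
    """Exhaustively find a maximum permutation code of length n with
    minimum Chebyshev distance d (feasible only for tiny n)."""
    perms = list(permutations(range(n)))
    for size in range(len(perms), 0, -1):
        for cand in combinations(perms, size):
            if all(chebyshev(p, q) >= d
                   for p, q in combinations(cand, 2)):
                return list(cand)
    return []
```

For n = 3, d = 2 the search returns a code of size 3, e.g. {(0,1,2), (1,2,0), (2,0,1)} — exhaustive search is only a sanity check here, since the paper's regime is large n, where explicit constructions and bounds take over.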

  • Tourism Application Considering Waiting Time

    Daiki SAITO  Jeyeon KIM  Tetsuya MANABE  

     
    PAPER-Intelligent Transport System

      Publicized:
    2022/09/06
      Page(s):
    633-643

    Currently, the proportion of independent travel is increasing in Japan, and earlier studies supporting itinerary planning have been presented. However, these studies have given insufficient consideration to rural tourism. For example, tourists often use public transportation during trips in rural areas, yet it is often difficult for a tourist to plan an itinerary around public transportation, and even a feasible itinerary may entail long waiting times at stations or bus stops. Earlier studies have given these elements little consideration in itinerary planning. In addition to itinerary creation, navigation is also necessary, and recent navigation systems often consider dynamic information. During trips by public transportation, schedule changes are important dynamic information. For example, if tourists arrive at a bus stop earlier than planned, the waiting time becomes longer than the waiting time included in the itinerary; conversely, if they are running behind schedule, they risk missing the bus. Earlier studies have also given these schedule changes insufficient consideration. In this paper, we construct a tourism application that considers waiting time to improve the tourism experience in rural areas. We define waiting time in terms of static waiting time and dynamic waiting time: static waiting time is the waiting time included in the itinerary, whereas dynamic waiting time is the waiting time created by schedule changes during a trip. In this application, static waiting time is considered in the planning function and dynamic waiting time in the navigation function. To demonstrate the effectiveness of the application, experiments on the planning function and the navigation function were conducted in Tsuruoka City, Yamagata Prefecture. Based on the results, we confirmed that a tourist can readily plan a satisfactory itinerary using the planning function, and that the navigation function can use waiting times effectively by suggesting additional tourist spots.
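One way to formalize the two notions of waiting time described above is as simple differences between planned, actual, and scheduled times; the function names and times below are hypothetical, not the paper's definitions:

```python
from datetime import datetime, timedelta

def static_wait(planned_arrival, scheduled_departure):
    """Waiting time built into the itinerary: the gap between the
    planned arrival at a stop and the scheduled departure."""
    return scheduled_departure - planned_arrival

def dynamic_wait(planned_arrival, actual_arrival):
    """Extra (or reduced) waiting time created by a schedule change:
    arriving earlier than planned lengthens the wait, arriving later
    shortens it (and may risk missing the departure)."""
    return planned_arrival - actual_arrival
```

For example, arriving 12 minutes early at a stop with a 10-minute built-in wait yields 22 minutes of total waiting — time the navigation function could fill by suggesting a nearby tourist spot.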

  • A Data-Driven Gain Tuning Method for Automatic Hovering Control of Multicopters via Just-in-Time Modeling

    Tatsuya KAI  Ryouhei KAKURAI  

     
    LETTER-Systems and Control

      Publicized:
    2022/08/29
      Page(s):
    644-646

    This study develops a new automatic hovering control method for a multicopter based on just-in-time modeling. In particular, the main aim is to compute the gains of a feedback control law such that the multicopter hovers at a desired height at a desired time without overshoot or undershoot. First, a database containing various hovering data is constructed; the proposed method then computes gains for a query input from the database. Simulation results show that the multicopter achieves the control objectives, demonstrating the effectiveness of the new method.
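The database and gain-computation details are not given in the abstract; the following sketch only illustrates the general just-in-time (local-modeling) pattern — answer each query from the stored cases nearest to it — with hypothetical data and an inverse-distance-weighted average standing in for the paper's method:

```python
import math

def jit_gains(database, query, k=3):
    """Just-in-time gain lookup: find the k stored hovering cases
    closest to the query (desired height, desired settling time) and
    return an inverse-distance-weighted average of their gains.

    database: list of ((height, time), (kp, kd)) pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    neighbors = sorted(database, key=lambda rec: dist(rec[0], query))[:k]
    if dist(neighbors[0][0], query) == 0.0:
        return neighbors[0][1]          # exact match in the database
    w = [1.0 / dist(q, query) for q, _ in neighbors]
    kp = sum(wi * g[0] for wi, (_, g) in zip(w, neighbors)) / sum(w)
    kd = sum(wi * g[1] for wi, (_, g) in zip(w, neighbors)) / sum(w)
    return kp, kd
```

The defining trait of just-in-time modeling is visible here: no global model is fitted in advance; a local estimate is formed from the database only when a query arrives.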