
Keyword Search Result

[Keyword] EE(4079hit)

1241-1260hit(4079hit)

  • A New Higher Order Differential of CLEFIA

    Naoki SHIBAYAMA  Toshinobu KANEKO  

     
    PAPER-Symmetric Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    118-126

    CLEFIA is a 128-bit block cipher proposed by Shirai et al. at FSE 2007. It has been reported that CLEFIA has a 9-round saturation characteristic, in which 32 bits of the 112-th order differential of the 9-th round output equal zero. Using this characteristic, 14-round CLEFIA with a 256-bit secret key can be attacked with 2^113 blocks of chosen plaintext and 2^244.5 data encryptions. In this paper, we focus on higher order differentials of CLEFIA. We introduce two new concepts for higher order differentials: a control transform for the input and an observation transform for the output. With these concepts, we found a new 6-round saturation characteristic, in which 24 bits of the 9-th order differential of the 6-th round output equal zero. We also show a new 9-round saturation characteristic using a 105-th order differential, which is a 3-round extension of the 6-round one. Using it instead of the 112-th order differential, together with a meet-in-the-middle technique on the higher order differential table, the data and computational complexity of the attack on 14-round CLEFIA can be reduced to around 2^-5 and 2^-34 of the conventional attack, respectively.
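
    As a minimal illustration of the general tool behind saturation characteristics, the sketch below computes a d-th order differential, i.e. the XOR of a function over all 2^d values of d chosen input bits; any term of algebraic degree below d cancels. The toy 8-bit function and bit positions are assumptions made purely for illustration and have nothing to do with CLEFIA itself.

```python
# A minimal sketch of a d-th order differential (saturation-style) check.
# The toy function below is an illustrative assumption, not CLEFIA: the point
# is only that XOR-summing f over all 2^d values of d chosen input bits
# cancels every term of degree < d.

from itertools import product

def toy_sbox(x):
    """Toy 8-bit function of algebraic degree 3 in bits 0..2 (an assumption)."""
    b = [(x >> i) & 1 for i in range(8)]
    return (b[0] & b[1] & b[2]) ^ b[3] ^ b[5]

def higher_order_differential(f, base, bit_positions):
    """XOR f over the affine subspace base + span(e_i for i in bit_positions)."""
    acc = 0
    for mask_bits in product((0, 1), repeat=len(bit_positions)):
        x = base
        for choose, pos in zip(mask_bits, bit_positions):
            if choose:
                x ^= 1 << pos
        acc ^= f(x)
    return acc

# Degree of toy_sbox is 3, so any 4th-order differential vanishes ...
print(higher_order_differential(toy_sbox, 0b10100000, [0, 1, 2, 3]))  # -> 0
# ... while a 3rd-order differential over the three interacting bits does not.
print(higher_order_differential(toy_sbox, 0b10100000, [0, 1, 2]))     # -> 1
```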

  • A New Necessary Condition for Feedback Functions of de Bruijn Sequences

    Zhongxiao WANG  Wenfeng QI  Huajin CHEN  

     
    PAPER-Symmetric Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    152-156

    Recently, nonlinear feedback shift registers (NFSRs) have frequently been used as basic building blocks for stream ciphers. A major problem concerning NFSRs is to construct NFSRs that generate de Bruijn sequences, namely maximum-period sequences. In this paper, we present a new necessary condition for NFSRs to generate de Bruijn sequences. The new condition cannot be deduced from the previously proposed necessary conditions, and it is shown that the number of NFSRs whose feedback functions satisfy all the previous necessary conditions but not the new one is very large.
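
    A minimal sketch of the property at stake, assuming a tiny 3-stage register: an NFSR generates a de Bruijn sequence exactly when its state cycle visits all 2^n states. The two feedback functions below (a maximum-length LFSR and its standard de Bruijn fix-up) are illustrative choices, not the necessary conditions studied in the paper.

```python
# A minimal sketch of checking whether an n-stage NFSR generates a de Bruijn
# sequence, i.e. whether its state cycle visits all 2^n states.

def state_cycle_length(feedback, start):
    """Length of the NFSR state cycle reached from `start` (a tuple of bits)."""
    state = start
    steps = 0
    while True:
        state = state[1:] + (feedback(state),)
        steps += 1
        if state == start:
            return steps

# 3-stage LFSR from x^3 + x + 1: maximum-length but NOT de Bruijn (misses 000).
lfsr = lambda s: s[0] ^ s[1]
# Standard fix-up: flip the feedback bit when the last two stages are all zero,
# which splices the all-zero state into the cycle.
de_bruijn = lambda s: s[0] ^ s[1] ^ (1 if (s[1] == 0 and s[2] == 0) else 0)

print(state_cycle_length(lfsr, (0, 0, 1)))       # 7  (2^3 - 1, not de Bruijn)
print(state_cycle_length(de_bruijn, (0, 0, 1)))  # 8  (2^3, a de Bruijn sequence)
```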

  • Cryptanalyses on a Merkle-Damgård Based MAC — Almost Universal Forgery and Distinguishing-H Attacks

    Yu SASAKI  

     
    PAPER-Symmetric Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    167-176

    This paper presents two types of cryptanalysis on a Merkle-Damgård hash based MAC, which computes a MAC value of a message M by Hash(K||l||M) with a shared key K and the message length l. This construction is often called LPMAC. Firstly, we present a distinguishing-H attack against LPMAC instantiated with any narrow-pipe Merkle-Damgård hash function with O(2^(n/2)) queries, which refutes the widely believed assumption that LPMAC instantiated with a secure hash function should resist the distinguishing-H attack up to 2^n queries. In fact, all previous distinguishing-H attacks were dedicated attacks depending on the underlying hash algorithm, and in most cases reduced-round variants were attacked with a complexity between 2^(n/2) and 2^n. Because our attack is generic, it improves on these results: full rounds are attacked with O(2^(n/2)) complexity. Secondly, we show that an even stronger attack, a powerful form of almost universal forgery, can be performed on LPMAC. In this setting, attackers can modify the first several message blocks of a given message and aim to recover an internal state and forge the MAC value. For any narrow-pipe Merkle-Damgård hash function, our attack can be performed with O(2^(n/2)) queries. These results show that the length-prepending scheme is not enough to achieve a secure MAC.
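
    The analyzed construction is stated explicitly in the abstract, so a small sketch may help fix notation: the tag is Hash(K || l || M) with the length l prepended. SHA-256 as the narrow-pipe hash, a 16-byte key, and a 64-bit big-endian length encoding are illustrative assumptions; the paper's attacks are generic over such choices.

```python
# A minimal sketch of the LPMAC construction Hash(K || l || M).
# Key size and length encoding are assumptions made for illustration only.

import hashlib
import struct

def lpmac(key: bytes, message: bytes) -> bytes:
    length_block = struct.pack(">Q", len(message))   # l: prepended message length
    return hashlib.sha256(key + length_block + message).digest()

tag = lpmac(b"\x00" * 16, b"example message")
print(tag.hex())
```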

  • A Method of Parallelizing Consensuses for Accelerating Byzantine Fault Tolerance

    Junya NAKAMURA  Tadashi ARARAGI  Toshimitsu MASUZAWA  Shigeru MASUYAMA  

     
    PAPER-Dependable Computing

      Vol:
    E97-D No:1
      Page(s):
    53-64

    We propose a new method that accelerates asynchronous Byzantine Fault Tolerant (BFT) protocols designed on the principle of state machine replication. State machine replication protocols ensure consistency among replicas by applying operations in the same order to all of them. A naive way to determine the application order is to repeatedly execute the BFT consensus protocol to decide the next operation to execute, but this may introduce inefficiency caused by waiting for the previous consensus execution to complete. To reduce this inefficiency, our method allows consensus instances to run in parallel while keeping the consensus results consistent across the replicas. In this paper, we also prove the correctness of our method and experimentally compare it with the existing method in terms of latency and throughput. The evaluation results show that our method makes a BFT protocol three to four times faster than the existing one when some machines or message transmissions are delayed.

  • About Validity Checks of Augmented PAKE in IEEE 1363.2 and ISO/IEC 11770-4

    SeongHan SHIN  Kazukuni KOBARA  

     
    LETTER-Cryptography and Information Security

      Vol:
    E97-A No:1
      Page(s):
    413-417

    An augmented PAKE (Password-Authenticated Key Exchange) protocol provides password-only authentication in the presence of an attacker, establishment of session keys between the parties involved, and extra protection against server compromise (i.e., exposure of password verification data). Among many augmented PAKE protocols, the AMP variants (AMP2 [16] and AMP+ [15]) have been standardized in IEEE 1363.2 [9] and ISO/IEC 11770-4 [10]. In this paper, we thoroughly investigate APKAS-AMP (based on AMP2 [16]) and KAM3 (based on AMP+ [15]), which require several validity checks on the values received and computed by the parties when using a secure prime. After showing some attacks on APKAS-AMP and KAM3, we suggest new sanity checks that are clear and sufficient to prevent an attacker from mounting these attacks.

  • Towards Trusted Result Verification in Mass Data Processing Service

    Yan DING  Huaimin WANG  Peichang SHI  Hongyi FU  Xinhai XU  

     
    PAPER

      Vol:
    E97-B No:1
      Page(s):
    19-28

    Computation integrity is difficult to verify when mass data processing is outsourced. Current integrity protection mechanisms and policies verify results generated by participating nodes within the computing environment of the service provider (SP), which cannot prevent subjective cheating by SPs. This paper provides an analysis and modeling of computation integrity for mass data processing services. A third-party sampling-result verification method, named TS-TRV, is proposed to prevent lazy cheating by SPs. TS-TRV is a general solution for verifying the intermediate results of common MapReduce jobs, and it utilizes the powerful computing capability of SPs to support the verification computation, thus lessening the computing and transmission burdens on the verifier. Theoretical analysis indicates that TS-TRV is effective in detecting incorrect results, with no false positives and almost no false negatives, while ensuring the authenticity of sampling. Extensive experiments show that the cheating detection rate of TS-TRV exceeds 99% with only a few samples needed; the computation overhead is mainly on the SP, while the network transmission overhead of TS-TRV is only O(log N).
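
    A minimal sketch of the generic principle behind sampling-based result verification (not the exact TS-TRV protocol): if the verifier recomputes k uniformly sampled intermediate results, a lazy SP that corrupts a fraction f of them escapes detection only if every sample happens to be correct.

```python
# Detection power of sampling-based verification: a cheat that corrupts a
# fraction f of results goes undetected only if all k sampled results are
# correct. The fraction and sample sizes below are illustrative assumptions.

def detection_probability(f: float, k: int) -> float:
    return 1.0 - (1.0 - f) ** k

for k in (10, 50, 100):
    print(k, round(detection_probability(0.05, k), 4))
# With 5% of results corrupted, about 100 samples already detect cheating
# with probability above 99%.
```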

  • Performance Analysis of NAV Based Contention Window in IEEE 802.11 LAN

    Seung-Sik CHOI  

     
    LETTER-Mobile Information Network and Personal Communications

      Vol:
    E97-A No:1
      Page(s):
    436-439

    In the IEEE 802.11 standard, the contention window (CW) sizes are not efficient because they do not take the system load into account. Several mechanisms have been proposed to achieve the maximum throughput through an optimal CW, but some parameters, such as the number of stations and the system utilization, are difficult to measure in WLAN systems. To solve this problem, we use the network allocation vector (NAV), which reflects the transmissions of other stations and can therefore be used to measure the system load, so that the CW sizes can be estimated from it. In this paper, we derive an analytical model for the optimal CW sizes and the maximum throughput using the NAV and show the relationships between the CW sizes, the throughput, and the NAV.
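
    A minimal sketch of the measurement idea, under assumed numbers: NAV reservations overheard during a window estimate how busy the medium is with other stations' traffic, and the CW can then be scaled with that load. The linear mapping below is a purely hypothetical placeholder; the paper derives the optimal CW analytically from the NAV-based load estimate.

```python
# Estimate medium load from observed NAV durations and map it to a CWmin.
# The window length, NAV samples, and linear mapping are assumptions made
# only to illustrate the direction of the adaptation.

def estimate_busy_fraction(nav_durations_us, window_us):
    return min(1.0, sum(nav_durations_us) / window_us)

def cwmin_from_load(busy_fraction, cw_lo=15, cw_hi=1023):
    # hypothetical monotone mapping, not the paper's analytical optimum
    return round(cw_lo + busy_fraction * (cw_hi - cw_lo))

navs = [320, 540, 280, 610]            # NAV reservations seen in the window (us)
load = estimate_busy_fraction(navs, window_us=10_000)
print(load, cwmin_from_load(load))
```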

  • Method of Image Green's Function in Grating Theory: Extinction Error Field

    Junichi NAKAYAMA  Yasuhiko TAMURA  

     
    BRIEF PAPER-Periodic Structures

      Vol:
    E97-C No:1
      Page(s):
    40-44

    This paper deals with an integral equation method for analyzing the diffraction of a transverse magnetic (TM) plane wave by a perfectly conductive periodic surface. In the region below the periodic surface, the extinction theorem holds, and the total field vanishes if the field solution is determined exactly. For an approximate solution, the extinction theorem does not hold but an extinction error field appears. By use of an image Green's function, new formulae are given for the extinction error field and the mean square extinction error (MSEE), which may be useful as a validity criterion. Numerical examples are given to demonstrate that the formulae work practically even at a critical angle of incidence.

  • Virtual Continuous CWmin Control Scheme of WLAN

    Yuki SANGENYA  Fumihiro INOUE  Masahiro MORIKURA  Koji YAMAMOTO  Fusao NUNO  Takatoshi SUGIYAMA  

     
    PAPER-Foundations

      Vol:
    E97-A No:1
      Page(s):
    40-48

    In this paper, a priority control problem between uplink and downlink flows in IEEE 802.11 wireless LANs is considered. The minimum contention window size (CWmin) takes a nonnegative integer value, and CWmin control is one way to achieve priority control and fairness between links. However, it cannot achieve precise priority control when the CWmin values become small. As a solution to this problem, this paper proposes a new CWmin control method called virtual continuous CWmin control (VCCC). The key concept of this method is to use small and large CWmin values probabilistically, so that the expected value of CWmin can be any nonnegative real number; this solves the precision problem. Moreover, we propose a theoretical analysis model for the VCCC scheme. Computer simulation results show that the proposed scheme improves throughput and achieves fairness between uplink and downlink flows in an infrastructure-mode IEEE 802.11 wireless LAN. The throughput of the proposed scheme is 31% higher than that of a conventional scheme when the number of wireless stations is 18, and the theoretical analysis agrees with the simulated throughput to within 1% when the number of STAs is less than 10.
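
    A minimal sketch of the key VCCC idea as stated above: draw an integer CWmin at random from two neighbouring values so that its expected value equals a desired nonnegative real number. Using the floor/ceil pair is an illustrative assumption; the paper's scheme may choose its small and large values differently.

```python
# Realize a real-valued *expected* CWmin by randomizing between two integers.
# The floor/ceil choice is an assumption for illustration.

import math
import random

def draw_cwmin(target: float) -> int:
    lo, hi = math.floor(target), math.ceil(target)
    p_hi = target - lo                  # E[CWmin] = lo*(1-p_hi) + hi*p_hi = target
    return hi if random.random() < p_hi else lo

samples = [draw_cwmin(3.3) for _ in range(100000)]
print(sum(samples) / len(samples))      # ~3.3 on average
```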

  • Study of Coordinated Set of Coordinated Multi-Point Transmission with Limited Feedback

    Jianxin DAI  Ming CHEN  Mei ZHAO  Ziyan JIA  Zhengquan LI  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:1
      Page(s):
    171-181

    In a Coordinated Multi-Point (CoMP) system with limited feedback, choosing a reasonable coordinated set relies heavily on the splitting factor that divides the total feedback bits into channel direction information (CDI) bits and channel quality information (CQI) bits. This paper examines the relation between the splitting factor and the coordinated set. After defining a penalty factor, we formulate a net ergodic capacity optimization problem whose variables are the number of coordinated BSs, the radius of the divided area, and the splitting factor. Based on an existing codebook and a quantized channel error model, the downlink received signal model is updated to include the splitting factor, and its stochastic properties are obtained using results from random matrix theory. A close approximate expression relating the splitting factor to be optimized to the coordinated set is derived. In addition, a revised adaptive feedback scheme is proposed to split the feedback bits. Simulation results show that the proposed scheme provides a significant performance gain, especially when the user velocity is high.

  • The RSA Group Is Adaptive Pseudo-Free under the RSA Assumption

    Masayuki FUKUMITSU  Shingo HASEGAWA  Shuji ISOBE  Hiroki SHIZUYA  

     
    PAPER-Public Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    200-214

    The notion of pseudo-free groups was first introduced and formalized by Hohenberger and Rivest in order to unify cryptographic assumptions. Catalano, Fiore and Warinschi proposed a generalized notion called adaptive pseudo-free groups, and showed that the RSA group $Z_N^\times$ is adaptive pseudo-free with some specific parametric distribution under the strong RSA assumption. In this paper, we develop an alternative parametric distribution and show that the RSA group $Z_N^\times$ is adaptive pseudo-free with the parametric distribution under the RSA assumption rather than the strong RSA assumption.

  • Performance Analysis of MIMO/FSO Systems Using SC-QAM Signaling over Atmospheric Turbulence Channels

    Trung HA DUYEN  Anh T. PHAM  

     
    PAPER-Foundations

      Vol:
    E97-A No:1
      Page(s):
    49-56

    We theoretically study the performance of multiple-input multiple-output (MIMO) free-space optical (FSO) systems using subcarrier quadrature amplitude modulation (SC-QAM) signaling. The system average symbol-error rate (ASER) is derived taking into account the atmospheric turbulence effects on the MIMO/FSO channel, which is modeled by the log-normal and gamma-gamma distributions for weak and moderate-to-strong turbulence conditions, respectively. We quantitatively discuss the influence of the index-of-refraction structure parameter, the link distance, and different MIMO configurations on the system ASER. We also analytically derive and discuss the MIMO/FSO average (ergodic) channel capacity (ACC), expressed in terms of the average spectral efficiency (ASE), under various channel conditions. Monte Carlo simulations are performed to validate the mathematical analysis, and a good agreement between numerical and simulation results is confirmed.
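
    A minimal sketch of the Monte Carlo side of such an analysis, under assumed turbulence parameters: gamma-gamma irradiance samples are generated as the product of two independent unit-mean Gamma variates, which is the standard construction for moderate-to-strong turbulence.

```python
# Monte Carlo sampling of the gamma-gamma irradiance model: channel gain is
# the product of two independent unit-mean Gamma variates (large- and
# small-scale eddies). The alpha/beta values are illustrative assumptions,
# not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 1.9          # turbulence parameters (assumed for illustration)
n = 1_000_000

x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # large-scale fluctuations
y = rng.gamma(shape=beta,  scale=1.0 / beta,  size=n)   # small-scale fluctuations
irradiance = x * y                                      # gamma-gamma samples, E[I] = 1

print(irradiance.mean())        # ~1.0
print(irradiance.var())         # ~ 1/alpha + 1/beta + 1/(alpha*beta)
```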

  • Bit-Parallel Cubing Computation over GF(3^m) for Irreducible Trinomials

    Sun-Mi PARK  Ku-Young CHANG  Dowon HONG  Changho SEO  

     
    PAPER-Algorithms and Data Structures

      Vol:
    E97-A No:1
      Page(s):
    347-353

    We propose a parallel p-th powering method over an arbitrary finite field GF(p^m). Using the proposed method, we present explicit formulae for the computation of cubing over a ternary field GF(3^m) defined by an irreducible trinomial. We show that the field cubing computation for irreducible trinomials, which plays an important role in pairing computation, can be implemented very efficiently.
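
    A minimal sketch of why cubing in characteristic 3 is cheap: the map a -> a^3 is linear, so cubing sum(a_i x^i) only moves each coefficient to position 3i, followed by one reduction modulo the defining trinomial. The toy field GF(3^3) with f(x) = x^3 + 2x + 1 (irreducible over GF(3)) is an illustrative assumption, not a parameter from the paper.

```python
# Cubing in GF(3^m) via the linearity of the Frobenius map, checked against
# three field multiplications. The field GF(3^3) with f(x) = x^3 + 2x + 1 is
# an assumption used only for illustration.

M = 3
F = [1, 2, 0, 1]                 # f(x) = 1 + 2x + 0x^2 + x^3, low degree first

def reduce_mod_f(poly):
    poly = poly[:]
    for d in range(len(poly) - 1, M - 1, -1):
        c = poly[d] % 3
        if c:
            # x^d = x^(d-M) * (-(lower part of f))  mod f
            for i in range(M):
                poly[d - M + i] = (poly[d - M + i] - c * F[i]) % 3
        poly[d] = 0
    return [c % 3 for c in poly[:M]]

def mul(a, b):
    prod = [0] * (2 * M - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % 3
    return reduce_mod_f(prod)

def cube_linear(a):
    spread = [0] * (3 * (M - 1) + 1)
    for i, ai in enumerate(a):
        spread[3 * i] = ai           # a_i x^i  ->  a_i x^(3i), since a_i^3 = a_i in GF(3)
    return reduce_mod_f(spread)

a = [2, 1, 1]                        # 2 + x + x^2
print(cube_linear(a))                # [2, 2, 1]
print(mul(mul(a, a), a))             # same result via three multiplications
```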

  • Mining Knowledge on Relationships between Objects from the Web

    Xinpeng ZHANG  Yasuhito ASANO  Masatoshi YOSHIKAWA  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:1
      Page(s):
    77-88

    How do global warming and agriculture influence each other? It is possible to answer this question by searching for knowledge about the relationship between global warming and agriculture. As exemplified by this question, there is strong demand for searching relationships between objects. Mining knowledge about relationships on Wikipedia has been studied; however, more diverse knowledge about relationships is available on the Web. By utilizing the objects constituting relationships mined from Wikipedia, we propose a new method to search images with surrounding text that include knowledge about relationships on the Web. Experimental results show that our method is effective and applicable for searching knowledge about relationships. We also construct a relationship search system named “Enishi” based on the proposed method. Enishi supplies a wealth of diverse knowledge, including images with surrounding text, to help users understand relationships deeply, by complementarily utilizing knowledge from Wikipedia and the Web.

  • Alignment Kernels Based on a Generalization of Alignments

    Kilho SHIN  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E97-D No:1
      Page(s):
    1-10

    This paper shows a way to derive positive definite kernels from edit distances. It is well known that, if a distance d is negative definite, then e^(-λd) is positive definite for any λ > 0. This property provides us with the opportunity to apply useful techniques of kernel multivariate analysis to the features of data captured by means of the distance. However, the known instances of edit distance are not always negative definite, and, even worse, it is usually not easy to examine whether a given instance of edit distance is negative definite. This paper introduces alignment kernels as an alternative means of deriving kernels from edit distances. The most important advantage of the alignment kernel lies in its easy-to-check sufficient condition for positive definiteness. In fact, when we surveyed edit distances for strings, trees and graphs, all but one were instantly verified to meet the condition and therefore proven to be positive definite.
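
    A minimal numeric illustration of the fact the paper starts from: if a distance d is negative definite, then e^(-λd) is positive definite for every λ > 0. The example uses d(x, y) = |x - y| on the real line (known to be negative definite) in place of an edit distance; the points and λ are assumptions.

```python
# Build K = exp(-lambda * D) from a negative definite distance and confirm
# positive (semi)definiteness via its eigenvalues. Data and lambda are
# illustrative assumptions.

import numpy as np

points = np.array([0.0, 0.4, 1.3, 2.7, 5.0])
D = np.abs(points[:, None] - points[None, :])   # pairwise distance matrix
lam = 0.8
K = np.exp(-lam * D)                            # candidate kernel matrix

eigvals = np.linalg.eigvalsh(K)
print(eigvals)                                  # all non-negative -> K is positive definite
```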

  • A Novel Low Computational Complexity Power Assignment Method for Non-orthogonal Multiple Access Systems

    Anxin LI  Atsushi HARADA  Hidetoshi KAYAMA  

     
    PAPER-Resource Allocation

      Vol:
    E97-A No:1
      Page(s):
    57-68

    Multiple access (MA) technology is of great importance for systems beyond Long Term Evolution (LTE). Non-orthogonal multiple access (NOMA), which exploits the power domain and advanced receivers, has recently been considered a candidate MA technology. In this paper, the power assignment method, which plays a key role in the performance of NOMA, is investigated. Power assignment that maximizes the geometric mean user throughput requires an exhaustive search and thus has an unacceptable computational complexity for practical systems. To solve this problem, a novel power assignment method is proposed by exploiting tree search and the characteristics of the serial interference cancellation (SIC) receiver. The proposed method achieves the same performance as the exhaustive search while greatly reducing the computational complexity. On the basis of the proposed power assignment method, the performance of NOMA is investigated by link-level and system-level simulations in order to provide insight into the suitability of NOMA for future MA. Simulation results verify the effectiveness of the proposed power assignment method and show that NOMA is a very promising MA technology for beyond-LTE systems.
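
    A minimal sketch of the exhaustive-search baseline mentioned above, for a 2-user downlink NOMA pair with SIC at the stronger user: sweep the power split and keep the one maximizing the geometric mean of the user rates. All link parameters are illustrative assumptions, and the paper's contribution is precisely to replace such a sweep with a cheaper tree search.

```python
# Exhaustive power-split search maximizing the geometric mean throughput of a
# 2-user NOMA pair with SIC. Channel gains, noise, and grid step are assumed.

import math

g_near, g_far, noise, P = 8.0, 1.0, 1.0, 1.0   # assumed link parameters

best = None
for step in range(1, 100):
    a = step / 100.0                           # fraction of power for the near user
    p_near, p_far = a * P, (1 - a) * P
    r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))   # far user: interference-limited
    r_near = math.log2(1 + p_near * g_near / noise)                   # near user: after SIC
    gm = math.sqrt(r_near * r_far)
    if best is None or gm > best[0]:
        best = (gm, a, r_near, r_far)

print(best)    # (geometric mean, power fraction, near-user rate, far-user rate)
```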

  • Double-Layer Plate-Laminated Waveguide Slot Array Antennas for a 39GHz Band Fixed Wireless Access System

    Miao ZHANG  Jiro HIROKAWA  Makoto ANDO  

     
    PAPER-Antennas and Propagation

      Vol:
    E97-B No:1
      Page(s):
    122-128

    A point-to-point fixed wireless access (FWA) system with a maximum throughput of 1Gbps has been developed in the 39GHz band. A double-layer plate-laminated waveguide slot array antenna is successfully realized with specific consideration of practical application. The antenna is designed to keep the VSWR under 1.5. The antenna input and the feeding network are configured to reduce both the antenna profile and its weight, and integration of the antenna into a wireless terminal is also taken into account. A shielding wall, whose effectiveness is experimentally demonstrated, is set in the middle of the wireless terminal to achieve spatial isolation of more than 65dB between two antennas on the H-plane. Thirty test antennas are fabricated by diffusion bonding of thin metal plates to investigate the tolerance and mass-productivity of this process. An aluminum antenna, which has the advantages of light weight and resistance to aging, is also fabricated and evaluated with an eye to the future.

  • Comprehensive Study of Integral Analysis on LBlock

    Yu SASAKI  Lei WANG  

     
    PAPER-Symmetric Key Based Cryptography

      Vol:
    E97-A No:1
      Page(s):
    127-138

    The current paper presents an integral cryptanalysis in the single-key setting against the lightweight block cipher LBlock reduced to 22 rounds. Our attack uses the same 15-round integral distinguisher as the previous attacks, but many techniques are taken into consideration in order to achieve a comprehensive understanding of the attack: choosing the best balanced-byte position, a meet-in-the-middle technique to identify right key candidates, the partial-sum technique, relations among subkeys, and a combination of exhaustive search with the integral analysis. Our results indicate that integral cryptanalysis is particularly useful for LBlock-like structures. At the end of this paper, we discuss which factors make the LBlock structure weak against integral cryptanalysis. Because designing lightweight cryptographic primitives is an actively discussed topic, we believe that this paper provides useful feedback for future designs.

  • An Exact Approach for GPC-Based Compressor Tree Synthesis

    Taeko MATSUNAGA  Shinji KIMURA  Yusuke MATSUNAGA  

     
    PAPER-Logic Synthesis, Test and Verification

      Vol:
    E96-A No:12
      Page(s):
    2553-2560

    In ASIC implementations, multi-operand adders, which calculate the sum of more than two operands, usually consist of a compressor tree, which reduces the number of operands to two without any carry propagation, followed by a carry-propagate adder for the remaining two operands. Compressor trees built from full adders and half adders cannot be implemented efficiently on LUT-based FPGAs, so carry-chains or dedicated structures have been utilized to produce multi-operand adders on FPGAs. Recent studies indicate that compressor trees can be implemented efficiently on LUTs using Generalized Parallel Counters (GPCs) as building blocks. This paper addresses the problem of synthesizing compressor trees based on GPCs. Based on the observation that characteristics such as area, power, and delay correlate roughly with the total number of GPCs and the maximum GPC level, the target problem can be regarded as minimizing the total number of GPCs and the maximum level, for which an ILP-based approach is proposed. The key point of our formulation is that, unlike the existing approach, it does not model the structure of the compressor tree directly; instead, the compression process itself is modeled, which reduces the number of variables and constraints in the ILP formulation. The experimental results demonstrate the advantage of our formulation in terms of quality and runtime.
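
    A minimal sketch of the "compression process" view the formulation is built on, using the (3;2) counter (a full adder, the simplest GPC) and a greedy covering: each level reduces the per-column bit counts until a two-row carry-propagate adder can finish. The paper instead targets larger LUT-friendly GPCs and minimizes the number of GPCs and levels exactly with an ILP, so the greedy rule here is only an illustrative stand-in for that bookkeeping.

```python
# Toy greedy compression process with (3;2) counters (full adders), tracking
# how many counters and levels are needed. Illustration only; the paper
# models this process as an ILP over general GPC shapes.

def compress(columns):
    """columns[i] = number of partial-product bits of weight 2^i.
    Reduce until every column holds at most two bits, so that a final
    carry-propagate adder can produce the sum."""
    counters, levels = 0, 0
    while max(columns) > 2:
        nxt = [0] * (len(columns) + 1)
        for i, count in enumerate(columns):
            fas, rest = divmod(count, 3)
            nxt[i] += fas + rest      # each (3;2) leaves one sum bit in place
            nxt[i + 1] += fas         # ... and sends one carry one column up
            counters += fas
        while len(nxt) > 1 and nxt[-1] == 0:
            nxt.pop()
        columns = nxt
        levels += 1
    return counters, levels

# e.g. adding eight 8-bit operands: eight bits in each of eight columns
print(compress([8] * 8))              # (counters used, levels needed)
```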

  • An Approximated Selection Algorithm for Combinations of Content with Virtual Local Server for Traffic Localization in Peer-Assisted Content Delivery Networks

    Naoya MAKI  Ryoichi SHINKUMA  Tatsuro TAKAHASHI  

     
    PAPER

      Vol:
    E96-D No:12
      Page(s):
    2684-2695

    Our prior papers proposed a traffic engineering scheme to further localize traffic in peer-assisted content delivery networks (CDNs). This scheme periodically combines content files and lets clients obtain the combined files at the unchanged single-content price, in order to induce altruistic clients to download the content files that are most likely to contribute to localizing network traffic. However, the selection algorithm in our prior work determined which content files should be combined, and when, according to the cache states of all clients, which is an unrealistic assumption in terms of computational complexity. This paper proposes the new concept of a virtual local server to reduce the computational complexity. The source server in our mechanism maintains a virtual caching network that reflects the cache states of all clients in the actual caching network and combines content files based on it. In this paper, instead of building the virtual caching network from the cache states of all clients, we approximate it from the cache state of the virtual local server of each local domain, i.e., the aggregated cache state of only the altruistic clients in that domain. Furthermore, we propose a content selection algorithm based on this virtual caching network. As the content model, we use a news life-cycle model, a striking instance of dynamic content models with severe changes in cache states. Computer simulations confirm that the proposed algorithm successfully localizes network traffic.
