
Keyword Search Result

[Keyword] SI (16,314 hits)

Showing results 4301-4320 of 16,314

  • An Inconsistency Management Support System for Collaborative Software Development

    Phan Thi Thanh HUYEN, Koichiro OCHIMIZU

    PAPER-Software Engineering
    Vol: E97-D No:1, Page(s): 22-33

    In collaborative software development, many change processes implementing change requests are executed concurrently by different workers. Because the workers do not have sufficient information about each other's work, and because the dependencies among artifacts are complicated, unexpected inconsistencies can arise among the artifacts impacted by the changes. Most previous studies concentrated only on concurrent changes and considered them separately. However, even when changes are not concurrent, inconsistencies may still occur if a worker does not recognize the impact of changes made by other workers on his changes, or the impact of his changes on other workers' changes. In addition, the changes in a change process are related to each other through their common goal of realizing the change request and through the dependencies among the changed artifacts. Therefore, to handle inconsistencies more effectively, we consider both concurrent and non-concurrent changes, as well as the context of a change, i.e., the change process containing the change, rather than the ongoing changes only. In this paper, we present an inconsistency awareness mechanism and a Change Support Workflow Management System (CSWMS) that realizes this mechanism. By monitoring the progress of the change processes and the ongoing changes in the client workspaces, CSWMS can notify the workers of a (potential) inconsistency in advance, along with the context of the inconsistency, that is, the changes causing the inconsistency and the change processes containing these changes. Based on the information provided by CSWMS, the workers can detect and resolve inconsistencies more easily and quickly. Our research can therefore contribute to building a safer and more efficient collaborative software development environment.

  • A Sparse Modeling Method Based on Reduction of Cost Function in Regularized Forward Selection

    Katsuyuki HAGIWARA

    PAPER-Artificial Intelligence, Data Mining
    Vol: E97-D No:1, Page(s): 98-106

    Regularized forward selection is a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from a large number of candidates by a greedy training procedure in terms of a regularized cost function, combined with an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. For this purpose, we focus on the reduction of the cost function brought about by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound for the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure, namely, that it selects the basis function which maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and effectively combining it with leave-one-out cross validation. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of the other methods, while the number of basis functions it selects is much smaller. We can therefore say that the proposed method yields a sparse representation while keeping relatively good generalization performance. Moreover, our method has the advantage that it does not require selecting a regularization parameter.
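
    A minimal sketch of the greedy selection loop described above, assuming a ridge-style penalty; the stopping threshold stands in for the paper's variance-based upper bound, each step fits only the new coefficient against the current residual, and all names are illustrative:

```python
import numpy as np

def regularized_forward_selection(Phi, y, lam, threshold):
    """Greedy regularized forward selection (illustrative sketch).

    Phi: (n, p) matrix of candidate basis functions evaluated at the data.
    y:   (n,) regression targets.
    lam: L2 regularization parameter.
    threshold: stopping level standing in for the paper's variance bound.
    """
    p = Phi.shape[1]
    selected, residual = [], y.astype(float)
    while len(selected) < p:
        # Cost reduction from appending basis j with its optimal weight:
        # (phi_j . r)^2 / (phi_j . phi_j + lam).
        reductions = np.array([
            -np.inf if j in selected else
            (Phi[:, j] @ residual) ** 2 / (Phi[:, j] @ Phi[:, j] + lam)
            for j in range(p)
        ])
        j = int(np.argmax(reductions))
        if reductions[j] < threshold:
            break  # no remaining basis reduces the cost significantly
        w = (Phi[:, j] @ residual) / (Phi[:, j] @ Phi[:, j] + lam)
        residual = residual - w * Phi[:, j]
        selected.append(j)
    return selected
```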

  • Sentence-Level Combination of Machine Translation Outputs with Syntactically Hybridized Translations

    Bo WANG, Yuanyuan ZHANG, Qian XU

    LETTER-Natural Language Processing
    Vol: E97-D No:1, Page(s): 164-167

    We describe a novel idea for improving machine translation by combining multiple candidate translations with extra translations. Without manual work, the extra translations can be generated by identifying and hybridizing the syntactic equivalents in the candidate translations. Candidate and extra translations are then combined at the sentence level for better overall translation performance.

  • Adaptive Channel Power Partitioning Scheme in WCDMA Femto Cell

    Tae-Won BAN

    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Vol: E97-B No:1, Page(s): 190-195

    Recently, small cell systems such as femto cells have been considered a good alternative for supporting the increasing demand for mobile data traffic because they can significantly enhance network capacity by increasing spatial reuse. In this paper, we analyze the coverage and capacity of a femto cell deployed in a hotspot to reduce the traffic loads of neighboring macro base stations (BSs). Our analysis shows that the coverage and capacity of a femto cell are seriously affected by the surrounding signal environment and can be greatly enhanced by adapting the channel power allocation to that environment. We therefore propose an adaptive power partitioning scheme in which the power allocation for channels is dynamically adjusted to suit the environment surrounding the femto cell. In addition, we numerically derive the optimal channel power allocation ratio that optimizes the performance of the femto cell in the proposed scheme. It is shown that the proposed scheme with the optimal channel power allocation significantly outperforms the conventional scheme with fixed channel power allocation.
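
    The numerical derivation of the optimal ratio can be pictured as a one-dimensional search; the objective below is a toy placeholder, not the paper's coverage/capacity model:

```python
import numpy as np

def optimal_power_ratio(objective, grid=np.linspace(0.01, 0.99, 99)):
    """Return the channel power ratio maximizing a supplied objective.

    The paper couples coverage and capacity to the surrounding signal
    environment; `objective` here is any callable, so this shows only
    the numerical search, not the paper's model.
    """
    return grid[int(np.argmax([objective(r) for r in grid]))]

# Toy trade-off: a control channel needs enough power for coverage,
# while the data channels benefit from the remainder.
print(optimal_power_ratio(lambda r: np.log2(1 + 10 * r) * (1 - r)))
```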

  • Virtual Continuous CWmin Control Scheme of WLAN

    Yuki SANGENYA, Fumihiro INOUE, Masahiro MORIKURA, Koji YAMAMOTO, Fusao NUNO, Takatoshi SUGIYAMA

    PAPER-Foundations
    Vol: E97-A No:1, Page(s): 40-48

    In this paper, a priority control problem between uplink and downlink flows in IEEE 802.11 wireless LANs is considered. The minimum contention window size (CWmin) takes a nonnegative integer value, and CWmin control is one solution for priority control to achieve fairness between links. However, CWmin control cannot achieve precise priority control when the CWmin values become small. As a solution to this problem, this paper proposes a new CWmin control method called the virtual continuous CWmin control (VCCC) scheme. The key concept of this method is to use small and large CWmin values probabilistically. The proposed scheme realizes an expected CWmin value that is a nonnegative real number and thus solves the precise priority control problem. Moreover, we propose a theoretical analysis model for the VCCC scheme. Computer simulation results show that the proposed scheme improves throughput performance and achieves fairness between uplink and downlink flows in an infrastructure-mode IEEE 802.11 wireless LAN. The throughput of the proposed scheme is 31% higher than that of a conventional scheme when the number of wireless stations is 18. The difference between the theoretical analysis and the computer simulation results for throughput is within 1% when the number of STAs is less than 10.
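
    The probabilistic mixing idea can be illustrated in a few lines; picking between the two integers adjacent to the real-valued target is an assumed minimal choice, not necessarily the exact pair of values the VCCC scheme uses:

```python
import math
import random

def draw_cwmin(target):
    """Draw an integer CWmin whose expectation equals the real target.

    Mixes floor(target) and floor(target)+1 so that E[CWmin] = target,
    realizing a virtually continuous CWmin.
    """
    lo = math.floor(target)
    p_hi = target - lo            # probability of drawing lo + 1
    return lo + 1 if random.random() < p_hi else lo

# Expectation check: the empirical mean approaches the target 3.3.
samples = [draw_cwmin(3.3) for _ in range(100_000)]
print(sum(samples) / len(samples))
```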

  • Randomness Leakage in the KEM/DEM Framework

    Hitoshi NAMIKI, Keisuke TANAKA, Kenji YASUNAGA

    PAPER-Public Key Based Cryptography
    Vol: E97-A No:1, Page(s): 191-199

    Recently, there have been many studies on constructing cryptographic primitives that remain secure even if some secret information leaks. In this paper, we consider the problem of constructing public-key encryption schemes that are resilient to leakage of the randomness used in the encryption algorithm. In particular, we consider the case in which public-key encryption schemes are constructed from the KEM/DEM framework and the leakage of randomness in the encryption algorithms of the KEM and DEM occurs independently. For this purpose, we define a new security notion for KEMs. We then provide a generic construction of a public-key encryption scheme that is resilient to randomness leakage from any KEM scheme satisfying this notion. We also construct a KEM scheme that satisfies the notion based on hash proof systems.
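
    For readers unfamiliar with the framework, a minimal sketch of KEM/DEM composition follows; kem_encap and dem_encrypt are hypothetical interfaces, and the randomness arguments are made explicit because the paper's model leaks them independently:

```python
def hybrid_encrypt(kem_encap, dem_encrypt, pk, message, kem_rand, dem_rand):
    """KEM/DEM composition: encapsulate a session key, then encrypt with it.

    kem_encap(pk, rand) -> (session_key, kem_ciphertext)
    dem_encrypt(key, msg, rand) -> dem_ciphertext
    The paper's construction additionally requires the KEM to satisfy its
    new leakage-related security notion, which is not modeled here.
    """
    session_key, kem_ct = kem_encap(pk, kem_rand)         # public-key part
    dem_ct = dem_encrypt(session_key, message, dem_rand)  # symmetric part
    return kem_ct, dem_ct
```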

  • An Accurate Packer Identification Method Using Support Vector Machine

    Ryoichi ISAWA, Tao BAN, Shanqing GUO, Daisuke INOUE, Koji NAKAO

    PAPER-Foundations
    Vol: E97-A No:1, Page(s): 253-263

    PEiD is a packer identification tool widely used for malware analysis, but its accuracy has been declining. There are two major reasons for this. The first is that PEiD does not provide a way to create signatures, even though it adopts a signature-based approach; signatures must be created manually, and it is difficult to keep up with packers that are created or upgraded rapidly. The second is that PEiD uses exact matching: if a signature contains any error, PEiD cannot identify the packer that corresponds to it. In this paper, we propose a new automated packer identification method that overcomes these limitations of PEiD and report the results of our numerical study. Our method applies a string-kernel-based support vector machine (SVM): it can measure the similarity between packed programs without manual operations such as creating signatures, and it provides an error-tolerant mechanism that can significantly reduce detection failures caused by minor signature violations. In addition, we use the byte sequence starting from the entry point of a packed program as the packer feature given to the SVM. That is, our method combines the advantages of the signature-based and machine learning (ML) based approaches. The numerical results on 3902 samples with 26 packer classes and 3 unpacked (not-packed) classes show that our method achieves a high accuracy of 99.46%, outperforming PEiD and an existing ML-based method proposed by Sun et al.
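
    A rough sketch of the classification pipeline follows. scikit-learn provides no built-in string kernel, so byte n-gram counts with a linear SVM stand in for the paper's string-kernel SVM; the feature length and n-gram range are assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def entry_point_feature(program_bytes, entry_offset, length=64):
    """Hex-encode `length` bytes starting at the program's entry point."""
    return program_bytes[entry_offset:entry_offset + length].hex()

def train_packer_classifier(features, labels):
    """features: hex strings from entry_point_feature; labels: packer names."""
    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(2, 8)),  # byte n-grams
        LinearSVC(),
    )
    model.fit(features, labels)
    return model
```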

  • Investigation on Frequency Diversity Effects of Various Transmission Schemes Using Frequency Domain Equalizer for DFT-Precoded OFDMA

    Lianjun DENG, Teruo KAWAMURA, Hidekazu TAOKA, Mamoru SAWAHASHI

    PAPER-Foundations
    Vol: E97-A No:1, Page(s): 30-39

    This paper presents the frequency diversity effects of localized transmission, clustered transmission, and intra-subframe frequency hopping (FH) using a frequency domain equalizer (FDE) for discrete Fourier transform (DFT)-precoded Orthogonal Frequency Division Multiple Access (OFDMA). In the evaluations, we employ the normalized frequency mean square covariance (NFMSV) as a measure of the frequency diversity effect, i.e., the randomization level of the frequency domain interleaving associated with turbo coding. Link-level computer simulation results show that, as the entire transmission bandwidth increases, frequency diversity is very effective in decreasing the required average received signal-to-noise power ratio (SNR) at the target average block error rate (BLER) using a linear minimum mean-square error (LMMSE) based FDE for DFT-precoded OFDMA. Moreover, we show that the NFMSV is an accurate measure of the frequency diversity effect for the three transmission schemes for DFT-precoded OFDMA. We also clarify the frequency diversity effects of the three transmission schemes from the viewpoint of the required average received SNR satisfying the target average BLER for various key radio parameters for DFT-precoded OFDMA in frequency-selective Rayleigh fading channels.

  • Analysis of Blacklist Update Frequency for Countering Malware Attacks on Websites

    Takeshi YAGI, Junichi MURAYAMA, Takeo HARIU, Sho TSUGAWA, Hiroyuki OHSAKI, Masayuki MURATA

    PAPER-Internet
    Vol: E97-B No:1, Page(s): 76-86

    We propose a method for determining the frequency of monitoring the activities of a malware download site used for malware attacks on websites. In recent years, there has been an increase in attacks that exploit vulnerabilities in web applications to infect websites with malware and maliciously use those websites as attack platforms. One scheme for countering such attacks is to blacklist malware download sites and filter out access to them from user websites. However, a malware download site is often constructed by maliciously manipulating an ordinary website. Once the malware has been deleted from the malware download site, this scheme must be able to unblacklist that site to prevent normal user websites from being falsely detected as malware download sites. However, if a malware download site is monitored too frequently for the presence of malware, the attacker may sense this monitoring and relocate the malware to a different site, so that the attack goes undetected until the newly generated malware download site is discovered. In response to these problems, we clarify the change in attack-detection accuracy caused by attacker behavior. This is done by modeling attacker behavior, specifying a state-transition model with respect to the blacklisting of a malware download site, and analyzing these models with both synthetically generated attack patterns and attack patterns measured in an operational network. From this analysis, we derive the optimal monitoring frequency that maximizes the true detection rate while minimizing the false detection rate.

  • Fuzzy Metric Based Weight Assignment for Deinterlacing

    Gwanggil JEON, Young-Sup LEE, SeokHoon KANG

    LETTER-Image
    Vol: E97-A No:1, Page(s): 440-443

    An effective interlaced-to-progressive scanning format conversion method is presented for the interpolation of interlaced images. Built on a weight assignment algorithm, the proposed method is composed of three stages: (1) straightforward interpolation with a pre-determined six-tap filter, (2) fuzzy-metric-based weight assignment, and (3) updating of the interpolation results. We first deinterlace the missing line with the six-tap filter in the working window. Then we compute the local weights among the adjacent pixels with a fuzzy metric. Finally, we deinterlace the missing pixels using the proposed interpolator. Comprehensive simulations conducted on different images and video sequences demonstrate the effectiveness of the proposed method, with significant improvement over conventional methods.
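
    Stage (1) can be sketched as a vertical six-tap interpolation of the missing lines; the H.264-style taps (1, -5, 20, 20, -5, 1)/32 are used as a plausible stand-in, since the letter's exact coefficients are not reproduced here:

```python
import numpy as np

def sixtap_deinterlace(field, coeffs=(1, -5, 20, 20, -5, 1)):
    """Fill the missing lines of an interlaced field (stage 1 above).

    `field` holds only the available lines, shape (h, w). Each missing
    line is a six-tap weighted sum of the three field lines above and
    the three below, with edge replication at the borders.
    """
    c = np.asarray(coeffs, dtype=float)
    c /= c.sum()
    h, w = field.shape
    out = np.empty((2 * h, w), dtype=float)
    out[0::2] = field                              # keep existing lines
    padded = np.pad(field, ((2, 3), (0, 0)), mode="edge")
    acc = np.zeros((h, w))
    for k in range(6):                             # field rows i-2 .. i+3
        acc += c[k] * padded[k:k + h]
    out[1::2] = np.clip(acc, 0, 255)               # assuming 8-bit range
    return out
```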

  • Adaptive Reversible Data Hiding via Integer-to-Integer Subband Transform and Adaptive Generalized Difference Expansion Method

    Taichi YOSHIDA, Taizo SUZUKI, Masaaki IKEHARA

    PAPER-Image
    Vol: E97-A No:1, Page(s): 384-392

    We propose an adaptive reversible data hiding method with superior visual quality and capacity, in which an adaptive generalized difference expansion (AGDE) method is applied to an integer-to-integer subband transform (I2I-ST). I2I-ST performs the subband transform reversibly, and the AGDE method is a state-of-the-art method of reversible data hiding. Objective and perceptual experimental results show that the proposed method achieves better visual quality than conventional methods at the same embedding rate, owing to the low variance in the frequency domain.
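
    The AGDE method builds on classic difference expansion; a minimal sketch of the basic (non-adaptive, non-generalized) DE step on one pixel pair follows, with the subband transform and adaptivity omitted:

```python
def de_embed(x, y, bit):
    """Embed one bit into pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2            # integer average, preserved by embedding
    h = x - y                   # difference
    h2 = 2 * h + bit            # expand and append the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pixel pair (fully reversible)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return bit, l + (h + 1) // 2, l - h // 2

# Round trip: both the payload bit and the cover pair are restored exactly.
assert de_extract(*de_embed(206, 201, 1)) == (1, 206, 201)
```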

  • Handoff Delay-Based Call Admission Control in Cognitive Radio Networks

    Ling WANG, Qicong PENG, Qihang PENG

    PAPER-Network
    Vol: E97-B No:1, Page(s): 49-55

    In this paper, we investigate how to achieve call admission control (CAC) that guarantees the QoS requirement on the call dropping probability caused by handoff timeout in cognitive radio (CR) networks. When a primary user (PU) appears, spectrum handoff must be initiated to maintain the secondary user's (SU's) link. We propose a novel virtual queuing (VQ) scheme to schedule the spectrum handoff requests sent by multiple SUs. Unlike conventional first-come-first-served (FCFS) scheduling, resuming transmission in the original channel has higher priority than switching to another channel; this costs less because it avoids the signaling cost of frequent spectrum switches. We characterize the effect of the PU's behavior and the number of SUs in the CR network on the handoff delay, and derive the user capacity under a given QoS requirement as a guideline for CAC. The analytical results show that call dropping performance can be greatly improved by CAC even when a large number of SUs arrive rapidly, and the VQ scheme is verified to reduce the handoff cost compared with existing methods.
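
    The scheduling preference can be illustrated with a two-class priority queue; this toy sketch only captures the resume-before-switch ordering described above, not the paper's full VQ model:

```python
import heapq
import itertools

RESUME, SWITCH = 0, 1   # resuming the original channel outranks switching

class VirtualQueue:
    """Toy scheduler for spectrum-handoff requests (illustrative only).

    Unlike plain FCFS, a request to resume transmission in the original
    channel is always served before a request to switch channels, since
    resuming avoids the signaling cost of a spectrum switch. Ties within
    a class are broken FCFS via an arrival counter.
    """
    def __init__(self):
        self._heap = []
        self._count = itertools.count()

    def submit(self, su_id, wants_switch):
        kind = SWITCH if wants_switch else RESUME
        heapq.heappush(self._heap, (kind, next(self._count), su_id))

    def next_request(self):
        kind, _, su_id = heapq.heappop(self._heap)
        return su_id, ("switch" if kind == SWITCH else "resume")

vq = VirtualQueue()
vq.submit("SU1", wants_switch=True)
vq.submit("SU2", wants_switch=False)
print(vq.next_request())   # SU2 resumes first despite arriving later
```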

  • Distinguishers on Double-Branch Compression Function and Applications to Round-Reduced RIPEMD-128 and RIPEMD-160

    Yu SASAKI, Lei WANG

    PAPER-Symmetric Key Based Cryptography
    Vol: E97-A No:1, Page(s): 177-190

    This paper presents differential-based distinguishers against double-branch compression functions and applies them to the ISO standard hash functions RIPEMD-128 and RIPEMD-160. A double-branch compression function computes two branch functions to update a chaining variable and then merges their outputs. For such a compression function, we observe that second-order differential paths can be constructed by finding a sub-path in each branch independently. This leads to 4-sum attacks on 47 steps (out of 64 steps) of RIPEMD-128 and 40 steps (out of 80 steps) of RIPEMD-160. Then new properties called a (partial) 2-dimension sum and a q-multi-second-order collision are considered. The partial 2-dimension sum is generated on 48 steps of RIPEMD-128 and 42 steps of RIPEMD-160, with complexities of 2^35 and 2^36, respectively. Theoretically, the 2-dimension sum is generated faster than by brute force up to 52 steps of RIPEMD-128 and 51 steps of RIPEMD-160, with complexities of 2^101 and 2^158, respectively. The results on RIPEMD-128 can also be viewed as q-multi-second-order collision attacks. The practical attacks have been implemented and examples are presented. We stress that our results do not affect the security of the full RIPEMD-128 and RIPEMD-160 hash functions.
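
    In standard notation (textbook form, not necessarily the paper's exact formulation), a 4-sum for a compression function F consists of an input x and fixed differences alpha, beta such that the following second-order relation holds:

```latex
\[
  F(x) \oplus F(x \oplus \alpha) \oplus F(x \oplus \beta)
       \oplus F(x \oplus \alpha \oplus \beta) = 0.
\]
```

    That is, a quartet of related inputs whose outputs XOR to zero; the independence of the two branches is what lets a second-order sub-path be built in each branch separately and then combined into such a quartet.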

  • Chosen-IV Correlation Power Analysis on KCipher-2 Hardware and a Masking-Based Countermeasure

    Takafumi HIBIKI, Naofumi HOMMA, Yuto NAKANO, Kazuhide FUKUSHIMA, Shinsaku KIYOMOTO, Yutaka MIYAKE, Takafumi AOKI

    PAPER-Symmetric Key Based Cryptography
    Vol: E97-A No:1, Page(s): 157-166

    This paper presents a chosen-IV (Initial Vector) correlation power analysis (CPA) on the international standard stream cipher KCipher-2, together with an effective countermeasure. First, we describe a power analysis technique that can reveal the secret key (initial key) of KCipher-2, and we evaluate the validity of the CPA with experiments using both FPGA and ASIC implementations of KCipher-2 processors. This paper also proposes a masking-based countermeasure against the CPA. The concept of the proposed countermeasure is to mask the intermediate data that pass through the non-linear function part, including the integer addition, the substitution functions, and the internal registers L1 and L2. We design two types of masked integer adders and two types of masked substitution circuits to minimize circuit area and delay, respectively. The effectiveness of the countermeasure is demonstrated through an experiment on the same FPGA platform. The performance of the proposed method is evaluated with an ASIC fabricated in TSMC 65nm CMOS process technology. Compared with the conventional design, the design with the countermeasure requires an area increase of at most 1.6 times.
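
    As background on why masked integer adders are needed at all, here is a minimal sketch of first-order Boolean masking; it shows only the generic idea, not the paper's circuits:

```python
import secrets

def mask(x, bits=32):
    """First-order Boolean masking: represent x as the pair (x ^ m, m)."""
    m = secrets.randbits(bits)
    return x ^ m, m

def masked_xor(a, b):
    """XOR is linear over shares, so it never requires unmasking."""
    return a[0] ^ b[0], a[1] ^ b[1]

def unmask(a):
    return a[0] ^ a[1]

x, y = 0xDEADBEEF, 0x01234567
assert unmask(masked_xor(mask(x), mask(y))) == x ^ y
# Integer addition, by contrast, is non-linear over XOR shares, which is
# why dedicated masked integer adders are designed in the paper.
```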

  • Towards Trusted Result Verification in Mass Data Processing Service

    Yan DING, Huaimin WANG, Peichang SHI, Hongyi FU, Xinhai XU

    PAPER
    Vol: E97-B No:1, Page(s): 19-28

    Computation integrity is difficult to verify when mass data processing is outsourced. Current integrity protection mechanisms and policies verify results generated by participating nodes within the computing environment of the service provider (SP), which cannot prevent subjective cheating by SPs. This paper provides an analysis and modeling of computation integrity for mass data processing services. A third-party sampling-result verification method, named TS-TRV, is proposed to prevent lazy cheating by SPs. TS-TRV is a general solution for verifying the intermediate results of common MapReduce jobs, and it utilizes the powerful computing capability of SPs to support verification computing, thus lessening the computing and transmission burdens on the verifier. Theoretical analysis indicates that TS-TRV is effective at detecting incorrect results, with no false positives and almost no false negatives, while ensuring the authenticity of sampling. Intensive experiments show that the cheating detection rate of TS-TRV exceeds 99% with only a few samples needed; the computation overhead falls mainly on the SP, while the network transmission overhead of TS-TRV is only O(log N).
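
    The sampling side of such verification can be pictured as below; this sketch omits TS-TRV's mechanisms for keeping the sample authentic (e.g. preventing the SP from predicting which records will be checked), which the paper depends on:

```python
import random

def sample_and_verify(reported, recompute, n_samples=10):
    """Check a few SP-reported intermediate results against recomputation.

    reported: dict mapping record keys to the SP's intermediate results.
    recompute: trusted reference computation for a single record.
    """
    keys = random.sample(sorted(reported), min(n_samples, len(reported)))
    for k in keys:
        if reported[k] != recompute(k):
            return False        # cheating detected on this sample
    return True                 # all sampled results agree

# An SP that falsifies a fraction f of the results evades a single check
# with probability (1 - f), so it is caught with about 1 - (1 - f)**n.
```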

  • A New Higher Order Differential of CLEFIA

    Naoki SHIBAYAMA, Toshinobu KANEKO

    PAPER-Symmetric Key Based Cryptography
    Vol: E97-A No:1, Page(s): 118-126

    CLEFIA is a 128-bit block cipher proposed by Shirai et al. at FSE 2007. It has been reported that CLEFIA has a 9-round saturation characteristic, in which 32 bits of the 112-th order differential of the 9-th round output equal zero. Using this characteristic, 14-round CLEFIA with a 256-bit secret key can be attacked with 2^113 blocks of chosen plaintext and 2^244.5 data encryptions. In this paper, we focus on higher order differentials of CLEFIA. We introduce two new concepts for higher order differentials: a control transform for the input and an observation transform for the output. With these concepts, we found a new 6-round saturation characteristic, in which 24 bits of the 9-th order differential of the 6-th round output equal zero. We also show a new 9-round saturation characteristic using a 105-th order differential, which is a 3-round extension of the 6-round one. Using it instead of the 112-th order differential, together with the meet-in-the-middle attack technique on the higher order differential table, the data and computational complexities of the attack on 14-round CLEFIA can be reduced to around 2^-5 and 2^-34 of those of the conventional attack, respectively.
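
    As background, the d-th order differential used above can be written in the standard textbook form (this is generic notation, not the paper's control/observation formalism): for a d-dimensional subspace V of input differences,

```latex
\[
  \Delta_V f(x) = \bigoplus_{v \in V} f(x \oplus v).
\]
```

    A saturation characteristic asserts that certain bits of this sum are zero for every input x and every key; above, d = 112 for the conventional 9-round characteristic and d = 105 for the proposed one obtained via the control and observation transforms.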

  • Pattern Reconstruction for Deviated AUT in Spherical Measurement by Using Spherical Waves

    Yang MIAO, Jun-ichi TAKADA

    PAPER-Antennas and Propagation
    Vol: E97-B No:1, Page(s): 105-113

    To characterize an antenna, the acquisition of its three-dimensional radiation pattern is a fundamental requirement. Spherical antenna measurement is a practical approach to measuring antenna patterns in spherical geometry. However, due to limitations on measurement range and measurement time, the measured samples may be incomplete on the scanning sphere or inadequate in terms of the sampling interval, so there is a need to extrapolate and interpolate them. Spherical wave expansion, with its band-limited property and the associated sampling theorem, provides a good tool for reconstructing antenna patterns. This research identifies the limitation of the conventional algorithm when reconstructing the pattern of an antenna that is not located at the coordinate origin of the measurement set-up. A novel algorithm is proposed to overcome this limitation by resampling between the unprimed and primed (antenna-centred) coordinate systems. The resampling of measured samples from the unprimed to the primed coordinate system can be conducted by a translational phase shift, and the resampling of the reconstructed pattern from the primed coordinate system back to the unprimed one can be accomplished by rotation and translation of spherical waves. The proposed algorithm enables analytical and continuous pattern reconstruction even under severe sampling conditions for a deviated AUT (antenna under test). Numerical investigations are conducted to validate the proposed algorithm.
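
    The translational phase shift has a compact far-field expression; the sign convention below assumes an e^{j omega t} time factor and is given only as an illustration of the idea:

```latex
\[
  E'(\hat{r}) = E(\hat{r})\, e^{\, j k\, \hat{r} \cdot \boldsymbol{d}},
\]
```

    where k is the wavenumber, \hat{r} the observation direction, and \boldsymbol{d} the displacement from the measurement origin to the antenna centre.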

  • Cryptanalysis of 249-, 250-, ..., 256-Bit Key HyRAL via Equivalent Keys

    Yuki ASANO, Shingo YANAGIHARA, Tetsu IWATA

    PAPER-Cryptography and Information Security
    Vol: E97-A No:1, Page(s): 371-383

    HyRAL is a blockcipher whose block size is 128 bits and which supports key lengths of 128, 129, ..., 256 bits. The cipher was proposed for the CRYPTREC project, and previous analyses did not identify any security weaknesses. In this paper, we first consider the longest key version, 256-bit key HyRAL, and present an analysis in terms of equivalent keys. We show that there are 2^51.0 equivalent keys (or 2^50.0 pairs of equivalent keys). Next, we propose an algorithm that derives an instance of equivalent keys with an expected time complexity of 2^48.8 encryptions and a limited amount of memory. Finally, we implement the proposed algorithm and fully verify its correctness by showing several instances of equivalent keys. We then consider shorter key lengths and show that there are equivalent keys in 249-, 250-, ..., 255-bit key HyRAL. For each of these key lengths, we present the expected time complexity to derive an instance of equivalent keys.

  • Cryptanalyses on a Merkle-Damgård Based MAC — Almost Universal Forgery and Distinguishing-H Attacks

    Yu SASAKI

    PAPER-Symmetric Key Based Cryptography
    Vol: E97-A No:1, Page(s): 167-176

    This paper presents two types of cryptanalysis of a Merkle-Damgård hash based MAC, which computes the MAC value of a message M as Hash(K||l||M) with a shared key K and the message length l. This construction is often called LPMAC. First, we present a distinguishing-H attack against LPMAC instantiated with any narrow-pipe Merkle-Damgård hash function with O(2^{n/2}) queries, which shows the incorrectness of the widely believed assumption that LPMAC instantiated with a secure hash function should resist the distinguishing-H attack up to 2^n queries. In fact, all previous distinguishing-H attacks were dedicated attacks depending on the underlying hash algorithm, and in most cases reduced rounds were attacked with a complexity between 2^{n/2} and 2^n. Because our attack is generic, it improves on these results: full rounds are attacked with O(2^{n/2}) complexity. Second, we show that an even stronger attack, a powerful form of an almost universal forgery attack, can be performed on LPMAC. In this setting, attackers can modify the first several message blocks of a given message and aim to recover an internal state and forge the MAC value. For any narrow-pipe Merkle-Damgård hash function, our attack can be performed with O(2^{n/2}) queries. These results show that the length-prepending scheme is not enough to achieve a secure MAC.
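
    The analyzed construction itself is compact; a sketch with SHA-256 as an example narrow-pipe Merkle-Damgård hash follows (the length encoding and key size are illustrative choices, not fixed by the paper):

```python
import hashlib
import struct

def lpmac(key: bytes, message: bytes) -> bytes:
    """LPMAC-style MAC: Hash(K || l || M) with the length l prepended."""
    l = struct.pack(">Q", len(message) * 8)   # message length in bits
    return hashlib.sha256(key + l + message).digest()

tag = lpmac(b"\x00" * 16, b"length prepending alone does not make a MAC")
print(tag.hex())
```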

  • Relation between Verifiable Random Functions and Convertible Undeniable Signatures, and New Constructions

    Kaoru KUROSAWA, Ryo NOJIMA, Le Trieu PHONG

    PAPER-Public Key Based Cryptography
    Vol: E97-A No:1, Page(s): 215-224

    Verifiable random functions (VRF), proposed in 1999, and selectively convertible undeniable signature (SCUS) schemes, proposed in 1990, have apparently been regarded as independent primitives in the literature. In this paper, we show that they are tightly related in the following sense: a VRF is exactly an SCUS, and the converse also holds under a condition. This directly yields several deterministic SCUS schemes based on existing VRF constructions. In addition, we create a new probabilistic SCUS scheme, which is very compact. We build efficient confirmation and disavowal protocols for the proposed SCUS schemes, based on what we call zero-knowledge protocols for generalized DDH and non-DDH. These zero-knowledge protocols can be built to be sequentially, concurrently, or universally composable.
