Keyword Search Result

[Keyword] MPO (945 hits)

141-160 hits (of 945)

  • Impossible Differential Attack on Reduced Round SPARX-128/256

    Muhammad ELSHEIKH  Mohamed TOLBA  Amr M. YOUSSEF  

     
    LETTER-Cryptography and Information Security

    Vol: E101-A No:4, Page(s): 731-733

    SPARX-128/256 is one of the two versions of the SPARX-128 block cipher family; it has a 128-bit block size and a 256-bit key size. SPARX was developed using ARX-based S-boxes with the aim of achieving provable security against single-trail differential and linear cryptanalysis. In this letter, we propose 20-round impossible differential distinguishers for SPARX-128. We then utilize these distinguishers to attack 24 rounds (out of 40) of SPARX-128/256. Our attack has a time complexity of 2^232 memory accesses, a memory complexity of 2^160.81 128-bit blocks, and a data complexity of 2^104 chosen plaintexts.
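
    As a point of reference for ARX designs like SPARX, the sketch below implements a 16-bit SPECK-style ARX round in Python. The rotation amounts (7, 2) are the SPECK-32 ones, used here purely for illustration; this is not the exact SPARX ARX-box specification.

```python
# Hedged sketch: a 16-bit SPECK-like ARX round in the style of the
# ARX-boxes used by SPARX. Illustration only, not the SPARX spec.

MASK16 = 0xFFFF

def rol(x, r):
    """Rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & MASK16

def ror(x, r):
    """Rotate a 16-bit word right by r bits."""
    return ((x >> r) | (x << (16 - r))) & MASK16

def arx_round(x, y, k):
    """One ARX round: modular Addition, Rotation, XOR on 16-bit halves."""
    x = (ror(x, 7) + y) & MASK16  # rotate, then add modulo 2^16
    x ^= k                        # mix in the round key
    y = rol(y, 2) ^ x             # rotate the other half and XOR
    return x, y
```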

  • Deep Neural Network Based Monaural Speech Enhancement with Low-Rank Analysis and Speech Present Probability

    Wenhua SHI  Xiongwei ZHANG  Xia ZOU  Meng SUN  Wei HAN  Li LI  Gang MIN  

     
    LETTER-Noise and Vibration

    Vol: E101-A No:3, Page(s): 585-589

    A monaural speech enhancement method combining a deep neural network (DNN) with low-rank analysis and speech present probability is proposed in this letter. Low-rank and sparse analysis is first applied to the noisy speech spectrogram to obtain an approximate low-rank representation of the noise. Then a joint feature training strategy for DNN-based speech enhancement is presented, which helps the DNN better predict the target speech. To reduce the residual noise in highly overlapping regions and the high-frequency domain, speech present probability (SPP) weighted post-processing is employed to further improve the quality of the speech enhanced by the trained DNN model. Compared with supervised non-negative matrix factorization (NMF) and the conventional DNN method, the proposed method achieves improved speech enhancement performance under both stationary and non-stationary conditions.
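
    To make the low-rank-plus-sparse step concrete, here is a minimal sketch of alternating singular-value and entrywise soft-thresholding on a spectrogram. The thresholds and iteration count are illustrative assumptions, not the letter's algorithm or parameters.

```python
import numpy as np

# Hedged sketch: split a noisy spectrogram S into a low-rank part L
# (slowly varying noise) and a sparse part E (speech). tau_l and tau_e
# are invented values for illustration.

def lowrank_sparse_split(S, tau_l=1.0, tau_e=0.1, n_iter=50):
    L = np.zeros_like(S)
    E = np.zeros_like(S)
    for _ in range(n_iter):
        # Low-rank step: soft-threshold the singular values of S - E.
        U, sig, Vt = np.linalg.svd(S - E, full_matrices=False)
        L = (U * np.maximum(sig - tau_l, 0.0)) @ Vt
        # Sparse step: entrywise soft-threshold the residual S - L.
        R = S - L
        E = np.sign(R) * np.maximum(np.abs(R) - tau_e, 0.0)
    return L, E

# Example on a random magnitude spectrogram (freq bins x frames).
S = np.abs(np.random.randn(64, 100))
L, E = lowrank_sparse_split(S)
```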

  • The Declarative and Reusable Path Composition for Semantic Web-Driven SDN

    Xi CHEN  Tao WU  Lei XIE  

     
    PAPER-Network

    Publicized: 2017/08/29, Vol: E101-B No:3, Page(s): 816-824

    The centralized controller of SDN enables a global topology view of the underlying network. This makes it possible for the SDN controller to achieve globally optimized resource composition and utilization, including optimized end-to-end paths. Currently, resource composition in the SDN arena is usually conducted in an imperative manner, with composition logic explicitly specified in high-level programming languages; this demands strong programming and OpenFlow backgrounds. This paper proposes declarative path composition, named Compass, which offers a human-friendly user interface similar to natural language. Borrowing methodologies from the Semantic Web, Compass models and stores SDN resources using OWL and RDF, respectively, to foster virtualized and unified management of network resources regardless of the concrete controller platform. In addition, path composition is conducted in a declarative manner: the user merely specifies the composition goal in the SPARQL query language instead of spelling out concrete composition details in a programming language. Composed paths are also reused based on similarity matching, to reduce the need for time-consuming path composition. The experimental results demonstrate the applicability of Compass to path composition and reuse.
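
    The declarative flavor can be illustrated with rdflib: a toy RDF model of links queried with a SPARQL property path. The namespace and the linksTo property are hypothetical stand-ins, not Compass's actual ontology.

```python
from rdflib import Graph, Namespace

# Hedged sketch: a hypothetical RDF network model queried declaratively.
NET = Namespace("http://example.org/net#")
g = Graph()
g.add((NET.s1, NET.linksTo, NET.s2))
g.add((NET.s2, NET.linksTo, NET.s3))

# Declarative goal: all switches reachable from s1 over one or more hops,
# expressed with a SPARQL property path instead of imperative code.
q = """
PREFIX net: <http://example.org/net#>
SELECT ?dst WHERE { net:s1 net:linksTo+ ?dst . }
"""
for row in g.query(q):
    print(row.dst)
```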

  • Accuracy Improvement of Characteristic Basis Function Method by Using Multilevel Approach

    Tai TANAKA  Yoshio INASAWA  Naofumi YONEDA  Hiroaki MIYASHITA  

     
    PAPER-Electromagnetic Theory

    Vol: E101-C No:2, Page(s): 96-103

    A method is proposed for improving the accuracy of the characteristic basis function method (CBFM) using a multilevel approach. In this technique, CBFs that take multiple scattering into account, calculated for each block (improved primary CBFs, or IP-CBFs), are applied to CBFM with a multilevel approach. Because IP-CBFs account for the interaction between blocks, the number of CBFs can be reduced while maintaining accuracy, even when the multilevel approach is used. The radar cross sections (RCS) of a cube, a cavity, and a dielectric sphere were analyzed using the proposed CBFs; the results show that accuracy is improved over the conventional method, despite no major change in the number of CBFs.

  • FPGA Components for Integrating FPGAs into Robot Systems

    Takeshi OHKAWA  Kazushi YAMASHINA  Hitomi KIMURA  Kanemitsu OOTSU  Takashi YOKOTA  

     
    PAPER-Emerging Applications

    Publicized: 2017/11/17, Vol: E101-D No:2, Page(s): 363-375

    A component-oriented FPGA design platform is proposed for robot system integration. FPGAs are known to be a power-efficient hardware platform, but the development cost of FPGA-based systems is currently too high to integrate them into robot systems. To solve this problem, we propose an FPGA component that allows FPGA devices to be easily integrated into robot systems based on the Robot Operating System (ROS). ROS-compliant FPGA components offer a seamless interface between the FPGA hardware and the software running on the CPU. Two experiments were conducted using the proposed components. In the first, an FPGA component for image processing executed 1.7 times faster than the original software-based component and was 2.51 times more power efficient than an ordinary PC processor, despite substantial communication overhead. The second experiment showed that an FPGA component for sensor fusion was able to process multiple sensor inputs efficiently and with very low latency via parallel processing.
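
    A minimal sketch of the software-side interface such a component exposes, written with rospy. The node and topic names are hypothetical; in a real FPGA component the callback would hand data to the hardware instead of transforming it in Python.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

# Hedged sketch: the topic-based interface of a ROS-compliant component.
# Node and topic names are hypothetical stand-ins.

def relay():
    rospy.init_node('fpga_component_stub')
    pub = rospy.Publisher('image_filtered', String, queue_size=10)

    def on_input(msg):
        # In a real FPGA component, the data would be handed to the
        # hardware here and the result published when it returns.
        pub.publish(String(data=msg.data.upper()))

    rospy.Subscriber('image_raw', String, on_input)
    rospy.spin()  # hand control to ROS callbacks

if __name__ == '__main__':
    relay()
```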

  • Extended Personalized Individual Semantics with 2-Tuple Linguistic Preference for Supporting Consensus Decision Making

    Haiyan HUANG  Chenxi LI  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2017/11/22, Vol: E101-D No:2, Page(s): 387-395

    Considering that people differ in their linguistic preferences, and in order to determine the consensus state when using Computing with Words (CWW) to support consensus decision making, this paper first proposes an interval composite scale based 2-tuple linguistic model, which realizes translation from words to interval numerical values and retranslation from interval numerical values back to words. Second, it proposes an interval composite scale based personalized individual semantics model (ICS-PISM), which can provide different linguistic representation models for different decision-makers. Finally, it proposes a consensus decision making model with ICS-PISM, which includes semantic translation and retranslation phases during the decision process and determines the consensus state of the whole process. The proposed models take full account of the fact that human language contains vague expressions and that real-world preferences are usually uncertain, and they provide efficient computation models to support consensus decision making.
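
    For background, the classic Herrera-Martínez 2-tuple representation that the interval composite scale model extends can be sketched as follows; the five-term scale is a made-up example.

```python
# Hedged sketch: the classic 2-tuple linguistic representation. A value
# beta in [0, g] on a scale of g+1 terms becomes (s_i, alpha) with
# i = round(beta) and alpha = beta - i in [-0.5, 0.5). The paper's
# interval composite scale generalizes this; TERMS is hypothetical.

TERMS = ["none", "low", "medium", "high", "perfect"]  # g = 4

def to_2tuple(beta):
    """Translate a numerical value to a linguistic 2-tuple."""
    i = int(round(beta))
    return TERMS[i], beta - i

def from_2tuple(term, alpha):
    """Retranslate a 2-tuple back to its numerical value."""
    return TERMS.index(term) + alpha

print(to_2tuple(2.8))             # ('high', -0.2)
print(from_2tuple("high", -0.2))  # 2.8
```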

  • Hierarchical Control of Concurrent Discrete Event Systems with Linear Temporal Logic Specifications

    Ami SAKAKIBARA  Toshimitsu USHIO  

     
    INVITED PAPER

    Vol: E101-A No:2, Page(s): 313-321

    In this paper, we study a control problem for a concurrent discrete event system, where several subsystems are partially synchronized via shared events, under local and global constraints described by linear temporal logic formulas. We propose a hierarchical control architecture consisting of local supervisors and a coordinator. While the supervisors ensure the local requirements, the coordinator decides which shared events to disable so as to satisfy the global specification. First, we construct Rabin games to obtain local supervisors. Next, we reduce them based on shared transitions. Finally, we construct a global Rabin game from the reduced supervisors and a deterministic Rabin automaton that accepts every run satisfying the global specification. By solving it, we obtain a coordinator that disables shared events to guarantee the global requirement. Moreover, the concurrent system controlled by the coordinator and the local supervisors is deadlock-free.
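
    A minimal sketch of the composition underlying this setting: the synchronous product of two subsystems that move together on shared events and independently on private ones. The automata are toy examples, not the paper's models.

```python
from itertools import product

# Hedged sketch: partial synchronization via shared events.

def sync_product(trans1, trans2, shared):
    """Compose two automata given as {(state, event): next_state} maps.
    Shared events move both components; private events move one."""
    prod = {}
    states1 = {s for s, _ in trans1} | set(trans1.values())
    states2 = {s for s, _ in trans2} | set(trans2.values())
    events = {e for _, e in trans1} | {e for _, e in trans2}
    for (s1, s2), e in product(product(states1, states2), events):
        t1, t2 = trans1.get((s1, e)), trans2.get((s2, e))
        if e in shared:
            if t1 is not None and t2 is not None:
                prod[((s1, s2), e)] = (t1, t2)   # both move together
        elif t1 is not None:
            prod[((s1, s2), e)] = (t1, s2)       # private to component 1
        elif t2 is not None:
            prod[((s1, s2), e)] = (s1, t2)       # private to component 2
    return prod

A = {("idle", "start"): "busy", ("busy", "sync"): "idle"}
B = {("wait", "sync"): "go", ("go", "reset"): "wait"}
print(sync_product(A, B, shared={"sync"}))
```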

  • Using Hierarchical Scenarios to Predict the Reliability of Component-Based Software

    Chunyan HOU  Jinsong WANG  Chen CHEN  

     
    PAPER-Software Engineering

    Publicized: 2017/11/07, Vol: E101-D No:2, Page(s): 405-414

    System scenarios derived from the system design specification play an important role in the reliability engineering of component-based software systems. Several scenario-based approaches have been proposed to predict the reliability of a system at design time; most of them adopt a flat construction of scenarios, which does not conform to software design specifications and can introduce the state-space explosion problem in large systems. This paper identifies various challenges related to scenario modeling at the early design stages based on the software architecture specification. A novel scenario-based reliability modeling and prediction approach is introduced. The approach adopts a hierarchical scenario specification to model software reliability, thereby avoiding state-space explosion and reducing computational complexity. Finally, an evaluation experiment shows the potential of the approach.
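
    A minimal sketch of the hierarchical idea, assuming a simple multiplicative reliability model: a scenario's reliability is the product over its steps, where a step may itself be a nested sub-scenario. The component reliabilities are invented for illustration.

```python
# Hedged sketch: reliability of a hierarchical scenario. The paper's
# model is richer; the numbers below are invented.

component_reliability = {"ui": 0.999, "auth": 0.995, "db": 0.990}

def scenario_reliability(scenario):
    """scenario: list of component names or nested lists (sub-scenarios)."""
    r = 1.0
    for step in scenario:
        if isinstance(step, list):          # nested sub-scenario
            r *= scenario_reliability(step)
        else:                               # leaf component execution
            r *= component_reliability[step]
    return r

login = ["ui", "auth"]                      # reusable sub-scenario
checkout = [login, "db", "ui"]              # hierarchical composition
print(round(scenario_reliability(checkout), 6))
```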

  • Changes of Evaluation Values on Component Rank Model by Taking Code Clones into Consideration

    Reishi YOKOMORI  Norihiro YOSHIDA  Masami NORO  Katsuro INOUE  

     
    PAPER-Software System

    Publicized: 2017/10/05, Vol: E101-D No:1, Page(s): 130-141

    Many software systems have been used and maintained for a long time. Through such maintenance processes, similar code fragments are often intentionally left in the source code, so knowing how to manage a software system that contains many similar code fragments becomes a major concern. In this study, we propose a method to pick up components that are commonly used by similar code fragments in a target software system. The method is realized by using the component rank model and checking the difference in the evaluation value of each component before and after merging components that contain similar code fragments. In many cases, a component whose evaluation value decreases is used by both of the merged components, so we regard such components as commonly used by similar code fragments. Based on the proposed approach, we implemented a system that calculates the difference in the evaluation values, and we conducted three evaluation experiments to confirm that our method is useful for detecting components commonly used by similar code fragments and to examine how it can help developers when they add similar components. Based on the experimental results, we also discuss some improvements and report the results of applying them.
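
    The component rank model can be sketched as a PageRank-style computation over the use relation, with a component's value fed by its users; the toy graph and damping factor below are illustrative, not the paper's data.

```python
import numpy as np

# Hedged sketch: a PageRank-style evaluation in the spirit of the
# component rank model, where value flows from using components to
# used components.

def component_rank(uses, d=0.85, n_iter=100):
    """uses[i] = list of components that component i uses."""
    n = len(uses)
    M = np.zeros((n, n))
    for i, targets in enumerate(uses):
        for j in targets:
            M[j, i] = 1.0 / len(targets)   # i distributes value to j
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - d) / n + d * M @ r        # power iteration
    return r / r.sum()

# Components 0 and 1 both use component 2; merging 0 and 1 (clones)
# would halve 2's inflow, the evaluation-value drop the paper looks for.
print(component_rank([[2], [2], []]))
```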

  • Design Study of Domain Decomposition Operation in Dataflow Architecture FDTD/FIT Dedicated Computer

    Hideki KAWAGUCHI  

     
    PAPER-Electromagnetic Theory

    Vol: E101-C No:1, Page(s): 20-25

    To achieve high-performance microwave simulation with a low-cost, small-sized machine and low energy consumption, an FDTD dedicated computer has been investigated. VHDL logic circuit simulations showed that an FDTD dedicated computer with a dataflow architecture achieves much higher performance than a high-end PC or GPU. The remaining task of this work is large-scale computation on the dedicated computer, since microwave simulations for only an 18×18×Z grid space (Z is the number of grids in the z direction) can be executed in a single FPGA at most. To handle much larger numerical model sizes for practical applications, this paper considers an implementation of a domain decomposition operation for the FDTD dedicated computer in a single FPGA.
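
    For intuition about why FDTD maps well onto a dataflow pipeline, here is the core Yee update loop of a 1-D FDTD scheme in Python, with normalized units; this is the textbook scheme, not the dedicated computer's implementation.

```python
import numpy as np

# Hedged sketch: 1-D FDTD (Yee) update loop, normalized units (c = 1,
# Courant number dt/dx = 0.5). The regular, local stencil is what a
# dataflow FPGA pipeline exploits.

nx, nt = 200, 500
ez = np.zeros(nx)       # electric field
hy = np.zeros(nx - 1)   # magnetic field, staggered half a cell
c = 0.5                 # Courant number dt/dx

for t in range(nt):
    hy += c * (ez[1:] - ez[:-1])          # update H from curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])    # update E from curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

print(float(ez.max()))
```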

  • On the Design Rationale of SIMON Block Cipher: Integral Attacks and Impossible Differential Attacks against SIMON Variants

    Kota KONDO  Yu SASAKI  Yosuke TODO  Tetsu IWATA  

     
    PAPER

    Vol: E101-A No:1, Page(s): 88-98

    SIMON is a lightweight block cipher designed by the NSA in 2013. The NSA presented the specification and implementation efficiency, but provided neither a detailed security analysis nor the design rationale. The original SIMON has rotation constants (1,8,2); Kölbl et al. regarded the constants as a parameter (a,b,c) and analyzed the security of SIMON block cipher variants against differential and linear attacks for all choices of (a,b,c). This paper complements the result of Kölbl et al. by considering integral and impossible differential attacks. First, we search for the number of rounds of integral distinguishers using a supercomputer. Our search algorithm follows the previous approach of Wang et al.; however, we introduce a new choice of the set of plaintexts satisfying the integral property. We show that the new choice indeed extends the number of rounds for several parameters. We also search for the number of rounds of impossible differential characteristics based on the miss-in-the-middle approach. Finally, we compare all parameters using our results and the observations of Kölbl et al. Interesting observations are obtained; for instance, we find that the parameters optimal with respect to resistance against differential attacks are not stronger than the original parameter with respect to integral and impossible differential attacks. Furthermore, we consider security against differential attacks in terms of differentials, and from the result we obtain a parameter that is potentially better than the original with respect to security against these four attacks.
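
    For reference, the SIMON round function with the rotation constants treated as a parameter (a, b, c), as in the analysis of Kölbl et al.; a 16-bit word size is chosen for illustration and the key schedule is omitted.

```python
# Hedged sketch: the SIMON round function with parameterized rotation
# constants (a, b, c). (1, 8, 2) is the original NSA choice.

MASK = 0xFFFF

def rol(x, r):
    """Rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & MASK

def simon_round(x, y, k, a=1, b=8, c=2):
    """One Feistel round: (x, y) -> (y ^ f(x) ^ k, x), where
    f(x) = (x<<<a & x<<<b) ^ x<<<c."""
    f = (rol(x, a) & rol(x, b)) ^ rol(x, c)
    return y ^ f ^ k, x
```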

  • An Extreme Learning Machine Architecture Based on Volterra Filtering and PCA

    Li CHEN  Ling YANG  Juan DU  Chao SUN  Shenglei DU  Haipeng XI  

     
    PAPER-Information Network

    Publicized: 2017/08/02, Vol: E100-D No:11, Page(s): 2690-2701

    The extreme learning machine (ELM) has recently attracted many researchers' interest due to its very fast learning speed, good generalization ability, and ease of implementation. However, it has a linear output layer, which may limit its capability to exploit the available information, since higher-order statistics of the signals are not taken into account. To address this, we propose a novel ELM architecture in which the linear output layer is replaced by a Volterra filter structure. Additionally, the principal component analysis (PCA) technique is used to reduce the number of effective signals transmitted to the output layer. This idea not only improves the processing capability of the network but also preserves the simplicity of the training process. We then carry out a performance evaluation and application analysis of the proposed architecture in the contexts of supervised classification and unsupervised equalization, respectively. The results obtained on publicly available datasets and on various channels, when compared with those produced by previously proposed ELM versions and a state-of-the-art algorithm, the support vector machine (SVM), highlight the adequacy and advantages of the proposed architecture and characterize it as a promising tool for signal processing tasks.
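
    A minimal sketch of the idea, assuming toy data: a standard ELM trained by pseudo-inverse, with second-order cross terms appended to the hidden outputs as a stand-in for the Volterra output structure (the PCA reduction step is omitted).

```python
import numpy as np

# Hedged sketch: basic ELM with a Volterra-like quadratic expansion of
# the hidden outputs. Dimensions and data are illustrative.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                 # inputs
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * X[:, 2])    # toy targets

W = rng.standard_normal((5, 20))                  # fixed random weights
b = rng.standard_normal(20)
H = np.tanh(X @ W + b)                            # hidden-layer outputs

# Volterra-like expansion: linear terms plus pairwise products.
i, j = np.triu_indices(H.shape[1])
V = np.hstack([H, H[:, i] * H[:, j]])

beta = np.linalg.pinv(V) @ y                      # one-shot training
acc = np.mean(np.sign(V @ beta) == y)
print(f"training accuracy: {acc:.3f}")
```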

  • High-Speed 3-D Electroholographic Movie Playback Using a Digital Micromirror Device Open Access

    Naoki TAKADA  Masato FUJIWARA  ChunWei OOI  Yuki MAEDA  Hirotaka NAKAYAMA  Takashi KAKUE  Tomoyoshi SHIMOBABA  Tomoyoshi ITO  

     
    INVITED PAPER

    Vol: E100-C No:11, Page(s): 978-983

    This study proposes high-speed computer-generated hologram playback using a digital micromirror device for high-definition spatiotemporal division multiplexing electroholography. The results show that we successfully reconstructed a high-definition 3-D movie of 3-D objects comprising approximately 900,000 points at 60 fps, with each frame divided into twelve parts.

  • A Novel Component Ranking Method for Improving Software Reliability

    Lixing XUE  Decheng ZUO  Zhan ZHANG  Na WU  

     
    LETTER-Dependable Computing

    Publicized: 2017/07/24, Vol: E100-D No:10, Page(s): 2653-2658

    This paper proposes a component ranking method to identify important components, i.e., those that have a great impact on system reliability. In contrast to an existing method, this method assumes that components which frequently invoke other components have more impact than others, and it employs component invocation structures and invocation frequencies to rank components by importance. It can strongly support improving the reliability of software systems, especially large-scale ones. Extensive experiments are provided to validate the method and compare its performance.

  • A Joint Interference Suppression and Multiuser Detection Scheme Based on Eigendecomposition for Three-Cell Multiple Relay Systems

    Ahmet Ihsan CANBOLAT  Kazuhiko FUKAWA  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2017/03/10, Vol: E100-B No:10, Page(s): 1939-1945

    To suppress intercell interference in three-cell half-duplex relay systems, joint interference suppression and multiuser detection (MUD) schemes that estimate weight coefficients by the recursive least-squares (RLS) algorithm have been proposed, but they show much worse bit error rate (BER) performance than maximum likelihood detection (MLD). To improve the BER performance, this paper proposes a joint interference suppression and MUD scheme that estimates the weight coefficients by eigenvalue decomposition. The proposed scheme retains the advantages of the conventional RLS-based schemes: it does not need channel state information (CSI) feedback and incurs far less computational complexity than MLD. In addition, it needs to know only two out of the three preambles used in the system. Computer simulations of orthogonal frequency-division multiplexing (OFDM) transmission under three-cell, frequency-selective fading conditions show that the eigendecomposition-based scheme decisively outperforms the conventional RLS-based scheme, although it requires higher computational complexity.
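
    The core linear-algebra step can be sketched as taking the principal eigenvector of a sample correlation matrix; the signal model below is a generic toy, not the paper's three-cell relay setup.

```python
import numpy as np

# Hedged sketch: weight estimation as the dominant eigenvector of a
# sample correlation matrix. The channel vector is hypothetical.

rng = np.random.default_rng(1)
steering = np.array([1.0, 0.7 + 0.3j, 0.2 - 0.5j])   # hypothetical channel
n_ant, n_snap = 3, 500

s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_ant, n_snap))
               + 1j * rng.standard_normal((n_ant, n_snap)))
X = np.outer(steering, s) + noise                    # received snapshots

R = X @ X.conj().T / n_snap                          # sample correlation
eigvals, eigvecs = np.linalg.eigh(R)                 # Hermitian EVD
w = eigvecs[:, -1]                                   # dominant eigenvector
print(np.abs(w.conj() @ steering) / np.linalg.norm(steering))  # alignment
```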

  • Two Classes of Optimal Constant Composition Codes from Zero Difference Balanced Functions

    Bing LIU  Xia LI  Feng CHENG  

     
    LETTER-Coding Theory

    Vol: E100-A No:10, Page(s): 2183-2186

    Constant composition codes (CCCs) are a special class of constant-weight codes; they include permutation codes as a subclass. The study and construction of CCCs with parameters meeting certain bounds has been an interesting research subject in coding theory. A bridge from zero difference balanced (ZDB) functions to CCCs with parameters meeting the Luo-Fu-Vinck-Chen bound was established by Ding (IEEE Trans. Information Theory 54(12) (2008) 5766-5770), providing a new approach for obtaining optimal CCCs. The objective of this letter is to construct two classes of ZDB functions whose parameters are not covered in the literature, and then to obtain from these new ZDB functions two classes of optimal CCCs meeting the Luo-Fu-Vinck-Chen bound.
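
    For reference, the standard ZDB definition behind such constructions (following Ding's formulation; notation may differ slightly from the letter):

```latex
% A function $f: (A,+) \to B$ with $|A| = n$ and $|\mathrm{Im}(f)| = m$
% is an $(n, m, \lambda)$ zero difference balanced (ZDB) function if
\[
  \left|\{\, x \in A : f(x + a) = f(x) \,\}\right| = \lambda
  \quad \text{for every } a \in A \setminus \{0\}.
\]
```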

  • Enhancing Purchase Behavior Prediction with Temporally Popular Items

    Chen CHEN  Chunyan HOU  Jiakun XIAO  Yanlong WEN  Xiaojie YUAN  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2017/05/30, Vol: E100-D No:9, Page(s): 2237-2240

    In the era of e-commerce, purchase behavior prediction is one of the most important issues for promoting both online companies' sales and consumers' experience. Previous research has usually used traditional features based on the statistics and temporal dynamics of items; such features lose detailed item information. In this study, we propose a novel kind of feature based on temporally popular items to improve the prediction. Experiments on a real-world dataset demonstrate the effectiveness and efficiency of our proposed method. Features based on temporally popular items are compared with traditional features associated with the statistics, temporal dynamics, and collaborative filtering of items. We find that temporally popular items are an effective and irreplaceable supplement to traditional features. Our study sheds light on the effectiveness of combining the popularity and temporal dynamics of items, which can be widely used for a variety of recommendations on e-commerce sites.
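
    A minimal sketch of deriving such a feature with pandas, assuming hypothetical column names and a 7-day popularity window:

```python
import pandas as pd

# Hedged sketch: a "temporally popular items" feature. Column names and
# the 7-day window are illustrative assumptions, not the paper's spec.

logs = pd.DataFrame({
    "user": [1, 1, 2, 2, 3],
    "item": ["a", "b", "a", "c", "a"],
    "ts": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-02",
                          "2024-04-01", "2024-05-03"]),
})

now = pd.Timestamp("2024-05-04")
recent = logs[logs["ts"] >= now - pd.Timedelta(days=7)]

# Popularity of each item inside the window.
pop = recent.groupby("item").size().rename("win_count").reset_index()

# Per-(user, item) feature: the item's popularity in the recent window.
feat = recent.merge(pop, on="item")
print(feat)
```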

  • A Study on Video Generation Based on High-Density Temporal Sampling

    Yukihiro BANDOH  Seishi TAKAMURA  Atsushi SHIMIZU  

     
    LETTER

    Publicized: 2017/06/14, Vol: E100-D No:9, Page(s): 2044-2047

    In current video encoding systems, the acquisition process is independent of the video encoding process. To compensate for this independence, pre-filters are applied before the encoder. However, conventional pre-filters are designed under constraints on the temporal resolution, so they are not fully optimized in terms of coding efficiency. By relaxing the restriction on the temporal resolution of current video encoding systems, it becomes possible to generate a video signal better suited to the encoding process. This paper proposes a video generation method with an adaptive temporal filter that utilizes a temporally over-sampled signal. The filter is designed based on dynamic programming. Experimental results show that the proposed method reduces the encoding rate by 3.01% on average compared with a constant mean filter.
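
    A minimal sketch of generating one output frame from temporally over-sampled frames with per-frame weights; the paper selects the weights by dynamic programming, whereas the weights here are fixed and illustrative.

```python
import numpy as np

# Hedged sketch: weighted temporal filtering of over-sampled frames.

def temporal_filter(frames, weights):
    """frames: (k, H, W) over-sampled frames captured within one
    output-frame interval; weights: length-k, summing to 1."""
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (w * frames).sum(axis=0)

frames = np.random.rand(4, 8, 8)          # 4x temporal over-sampling
out = temporal_filter(frames, [0.1, 0.4, 0.4, 0.1])
print(out.shape)
```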

  • Hole-Filling Algorithm with Spatio-Temporal Background Information for View Synthesis

    Huu-Noi DOAN  Tien-Dat NGUYEN  Min-Cheol HONG  

     
    PAPER

    Publicized: 2017/06/14, Vol: E100-D No:9, Page(s): 1994-2004

    This paper presents a new hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized free view. A new background codebook for extracting reliable temporal background information is introduced. In addition, the paper addresses the estimation of the spatial local background, distinguishing background from foreground regions so that spatial background information can be extrapolated. Background holes are filled by combining spatial and temporal background information. Finally, exemplar-based inpainting with a new priority function is applied to fill the remaining holes. The experimental results demonstrate that satisfactory synthesized views can be obtained using the proposed algorithm.
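
    A minimal sketch of the temporal-background idea, with a per-pixel temporal median standing in for the paper's background codebook:

```python
import numpy as np

# Hedged sketch: per-pixel temporal background from a frame history,
# a simplified stand-in for the paper's background codebook.

history = np.random.rand(10, 6, 6)        # (frames, H, W) luma history
background = np.median(history, axis=0)   # per-pixel temporal background

# Fill disocclusion holes (mask == True) in a synthesized view with the
# temporal background; remaining holes would go to inpainting.
view = np.random.rand(6, 6)
holes = np.zeros((6, 6), dtype=bool)
holes[2:4, 2:4] = True
view[holes] = background[holes]
print(view.shape)
```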

  • Iteration-Free Bi-Dimensional Empirical Mode Decomposition and Its Application

    Taravichet TITIJAROONROJ  Kuntpong WORARATPANYA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2017/06/19, Vol: E100-D No:9, Page(s): 2183-2196

    Bi-dimensional empirical mode decomposition (BEMD) is one of the most powerful methods for decomposing non-linear and non-stationary signals without an a priori basis function. It can be applied in many applications, such as feature extraction, image compression, and image filtering. Although modified BEMDs have been proposed in several approaches, their computational cost and the quality of their bi-dimensional intrinsic mode functions (BIMFs) still require improvement. In this paper, an iteration-free computation method for bi-dimensional empirical mode decomposition, called iBEMD, is proposed. Locally partial correlation for principal component analysis (LPC-PCA) is a novel technique for extracting BIMFs from an original signal without using extrema detection, which dramatically reduces the computation time. The LPC-PCA technique also enhances the quality of BIMFs by reducing artifacts. The experimental results, when compared with state-of-the-art methods, show that the proposed iBEMD method achieves faster computation of BIMF extraction and higher quality of BIMF images. Furthermore, the iBEMD method can clearly remove the illumination component of natural scene images under illumination change, thereby improving the performance of text localization and recognition.
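
    To illustrate the PCA building block, here is patch-wise PCA on an image keeping only the first principal component; the patch size and rank-1 reconstruction are illustrative choices, not the LPC-PCA algorithm itself.

```python
import numpy as np

# Hedged sketch: patch-wise PCA on an image, the kind of building block
# used in place of extrema detection. Patch size is an assumption.

def patch_pca_first_component(img, p=8):
    """Split img into non-overlapping p x p patches, keep only the first
    principal component of the patch set, and reconstruct."""
    h, w = img.shape
    patches = (img[:h - h % p, :w - w % p]
               .reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p * p))
    mean = patches.mean(axis=0)
    U, S, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    coarse = np.outer(U[:, 0] * S[0], Vt[0]) + mean   # rank-1 approx.
    return (coarse.reshape(h // p, w // p, p, p)
                  .transpose(0, 2, 1, 3)
                  .reshape(h - h % p, w - w % p))

img = np.random.rand(64, 64)
print(patch_pca_first_component(img).shape)
```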
