
Keyword Search Result


  • Towards Blockchain-Based Software-Defined Networking: Security Challenges and Solutions

    Wenjuan LI  Weizhi MENG  Zhiqiang LIU  Man-Ho AU  

     
    INVITED PAPER
    Publicized: 2019/11/08
    Vol: E103-D No:2
    Page(s): 196-203

    Software-Defined Networking (SDN) enables flexible deployment and innovation of new networking applications by decoupling and abstracting the control and data planes. It has radically changed how networked systems are built and managed, and it has reduced the barriers to entry for new players in the services market. It is considered a promising solution for providing the scale and versatility necessary for IoT. However, SDN also faces many challenges; for example, its centralized control plane can become a single point of failure. With the advent of blockchain technology, blockchain-based SDN has emerged as an architecture for securing distributed network environments. Motivated by this, in this work we summarize the generic framework of blockchain-based SDN, discuss security challenges and relevant solutions, and provide insights on future developments in this field.

  • Simple Black-Box Adversarial Examples Generation with Very Few Queries

    Yuya SENZAKI  Satsuya OHATA  Kanta MATSUURA  

     
    PAPER-Reliability and Security of Computer Systems
    Publicized: 2019/10/02
    Vol: E103-D No:2
    Page(s): 212-221

    Research on adversarial examples for machine learning has received much attention in recent years. Most previous approaches are white-box attacks, meaning the attacker must obtain the internal parameters of a target classifier beforehand to generate adversarial examples for it. This condition is hard to satisfy in practice. There is also research on black-box attacks, in which the attacker can obtain only partial information about the target classifier; however, it seems these attacks can be prevented, since they need to issue many suspicious queries to the target classifier. In this paper, we show that a naive defense strategy based on monitoring the number of queries will not suffice. More concretely, we propose generating not pixel-wise but block-wise adversarial perturbations to reduce the number of queries. Our experiments show that such coarse perturbations can confuse the target classifier, and we succeed in reducing the number of queries needed to generate adversarial examples in most cases. Our simple method is an untargeted attack and may have lower success rates than previous black-box attacks, but it needs fewer queries on average. Surprisingly, the minimum number of queries (one and three for the MNIST and CIFAR-10 datasets, respectively) is enough to generate adversarial examples in some cases. Based on these results, we also propose a detailed classification of black-box attackers and discuss countermeasures against the above attacks.
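
    The sketch below is a minimal, illustrative take on the block-wise black-box idea described above, not the authors' implementation: it repeatedly perturbs whole pixel blocks and counts queries to a hypothetical `query_label` oracle, which, along with the block size and perturbation magnitude, is an assumption made for illustration.

```python
import numpy as np

def block_perturb(image, query_label, block=4, eps=0.3, max_queries=100, rng=None):
    """Try random block-wise +/- perturbations until the predicted label changes.

    image: 2-D array in [0, 1]; query_label: black-box function image -> label.
    Returns (adversarial_image, num_queries) or (None, num_queries) on failure.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    original = query_label(image)
    queries = 1
    for _ in range(max_queries):
        # Pick one block and shift all of its pixels by the same signed amount.
        y = rng.integers(0, h - block + 1)
        x = rng.integers(0, w - block + 1)
        sign = rng.choice([-1.0, 1.0])
        candidate = image.copy()
        candidate[y:y + block, x:x + block] = np.clip(
            candidate[y:y + block, x:x + block] + sign * eps, 0.0, 1.0)
        queries += 1
        if query_label(candidate) != original:   # untargeted success
            return candidate, queries
        image = candidate                        # keep accumulating coarse perturbations
    return None, queries
```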

  • A Novel Structure-Based Data Sharing Scheme in Cloud Computing

    Huiyao ZHENG  Jian SHEN  Youngju CHO  Chunhua SU  Sangman MOH  

     
    PAPER-Reliability and Security of Computer Systems
    Publicized: 2019/11/15
    Vol: E103-D No:2
    Page(s): 222-229

    Cloud computing offers virtually unlimited computing and storage resources and provides many convenient services, for example, in Internet education and intelligent transportation systems. With the rapid development of cloud computing, more and more people pay attention to reducing the cost of data management. Data sharing is an effective model for decreasing the cost incurred by individuals or companies in dealing with data. However, existing data sharing schemes cannot reduce communication cost while ensuring the security of users. In this paper, an anonymous and traceable data sharing scheme is presented. The proposed scheme protects the privacy of the user and can also trace users who upload irrelevant information. Security and performance analyses show that the data sharing scheme is secure and effective.

  • Study on the Vulnerabilities of Free and Paid Mobile Apps Associated with Software Library

    Takuya WATANABE  Mitsuaki AKIYAMA  Fumihiro KANEI  Eitaro SHIOJI  Yuta TAKATA  Bo SUN  Yuta ISHII  Toshiki SHIBAHARA  Takeshi YAGI  Tatsuya MORI  

     
    PAPER-Network Security
    Publicized: 2019/11/22
    Vol: E103-D No:2
    Page(s): 276-291

    This paper reports a large-scale study that aims to understand how mobile application (app) vulnerabilities are associated with software libraries. We analyze both free and paid apps. Studying paid apps was quite meaningful because it helped us understand how differences in app development/maintenance affect the vulnerabilities associated with libraries. We analyzed 30k free and paid apps collected from the official Android marketplace. Our extensive analyses revealed that approximately 70%/50% of vulnerabilities of free/paid apps stem from software libraries, particularly from third-party libraries. Somewhat paradoxically, we found that more expensive/popular paid apps tend to have more vulnerabilities. This comes from the fact that more expensive/popular paid apps tend to have more functionality, i.e., more code and libraries, which increases the probability of vulnerabilities. Based on our findings, we provide suggestions to stakeholders of mobile app distribution ecosystems.

  • New Pseudo-Random Number Generator for EPC Gen2

    Hiroshi NOMAGUCHI  Chunhua SU  Atsuko MIYAJI  

     
    PAPER-Cryptographic Techniques
    Publicized: 2019/11/14
    Vol: E103-D No:2
    Page(s): 292-298

    RFID-enabled applications are ubiquitous in our society and are becoming increasingly important as IoT management grows. Meanwhile, concerns about the security and privacy of RFID are also increasing. The pseudo-random number generator is one of the core primitives for implementing RFID security. Therefore, it is necessary to design and implement a secure and robust pseudo-random number generator (PRNG) for current RFID tags. In this paper, we study the security of lightweight PRNGs for EPC Gen2 RFID tags, an EPC Global standard. To this end, we analyzed and improved on existing work presented at IEEE TrustCom 2017 and proposed a model that uses external random numbers. However, because that model relies on external random numbers, its speed is limited by how quickly those random numbers can be generated. To solve this problem, we developed a pseudo-random number generator that does not use external random numbers. The model consists of an LFSR, an NLFSR, and an SLFSR. Security is achieved through nonlinear processing, such as multiplication and logical AND over a Galois field, and a period longer than the key length is achieved by effectively combining multiple LFSRs. We show that the proposed PRNG has good randomness and passes the NIST randomness tests. We also show that it is resistant to identification attacks and GD attacks.
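
    As a rough illustration of how LFSR-style registers can be combined with a nonlinear mixing step, here is a minimal Python sketch of a Fibonacci LFSR plus a simple AND/XOR combiner; the tap positions, register widths, and combining function are illustrative assumptions and do not reproduce the proposed generator.

```python
class LFSR:
    """Fibonacci LFSR over GF(2): state is an integer, taps are bit positions."""
    def __init__(self, seed, taps, width):
        assert seed != 0, "an all-zero state would lock the register"
        self.state, self.taps, self.width = seed, taps, width

    def step(self):
        fb = 0
        for t in self.taps:                 # feedback = XOR of the tapped bits
            fb ^= (self.state >> t) & 1
        self.state = ((self.state << 1) | fb) & ((1 << self.width) - 1)
        return self.state & 1               # output bit

def combined_bit(a, b, c):
    """Nonlinear combiner (illustrative): AND acts as multiplication over GF(2)."""
    return (a.step() & b.step()) ^ c.step()

# Example: three small registers whose outputs are mixed nonlinearly.
regs = (LFSR(0b1011, (3, 2), 4), LFSR(0b10011, (4, 1), 5), LFSR(0b1100101, (6, 5), 7))
stream = [combined_bit(*regs) for _ in range(32)]
print("".join(map(str, stream)))
```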

  • Virtual Address Remapping with Configurable Tiles in Image Processing Applications

    Jae Young HUR  

     
    PAPER-Computer System
    Publicized: 2019/10/17
    Vol: E103-D No:2
    Page(s): 309-320

    Conventional linear or tiled address maps can degrade performance and memory utilization when traffic patterns do not match the underlying address map. The address map is usually fixed at design time, so it is difficult to adapt it to a given application. Modern embedded systems usually accommodate memory management units (MMUs); as a result, depending on virtual address patterns, the system can suffer performance overheads due to page table walks. To alleviate this overhead, we propose clustering and rearranging tiles to construct an MMU-aware configurable address map. To construct the clustered tiled map, a generic tile-number remapping algorithm is presented, and the address map is configured based on an adaptive dimensioning algorithm. Considering image processing applications, a design, analysis, implementation, and simulations are conducted. The results indicate that the proposed method can improve performance and memory utilization with moderate hardware costs.
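
    To make the idea of a tiled address map concrete, here is a small Python sketch that converts an (x, y) pixel coordinate into a linear address under a simple tiled layout; the tile dimensions and row-major orderings are illustrative assumptions, not the paper's configurable, MMU-aware map.

```python
def tiled_address(x, y, width, tile_w=64, tile_h=64, bytes_per_pixel=1):
    """Map a pixel coordinate to a linear address under a simple tiled layout.

    Tiles are laid out row-major across the image; pixels are row-major
    inside each tile, so one tile occupies a contiguous address range.
    """
    tiles_per_row = width // tile_w            # assume width is a multiple of tile_w
    tile_x, tile_y = x // tile_w, y // tile_h
    in_x, in_y = x % tile_w, y % tile_h
    tile_index = tile_y * tiles_per_row + tile_x
    offset_in_tile = in_y * tile_w + in_x
    return (tile_index * tile_w * tile_h + offset_in_tile) * bytes_per_pixel

# A vertical walk down one tile column stays inside a single 4 KiB page
# (64*64 = 4096 bytes), whereas a linear map would touch a new page per row.
addrs = [tiled_address(x=3, y=row, width=1024) for row in range(64)]
print(min(addrs) // 4096 == max(addrs) // 4096)   # True
```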

  • Efficient Methods to Generate Constant SNs with Considering Trade-Off between Error and Overhead and Its Evaluation

    Yudai SAKAMOTO  Shigeru YAMASHITA  

     
    PAPER-Computer System
    Publicized: 2019/11/12
    Vol: E103-D No:2
    Page(s): 321-328

    In Stochastic Computing (SC), we need to generate many stochastic numbers (SNs). Generating one SN conventionally requires a Stochastic Number Generator (SNG), which consists of a linear-feedback shift register (LFSR) and a comparator. When we calculate an arithmetic function by SC, we need to generate many SNs whose values equal the constant values used in the function, so the hardware overhead becomes huge. Accordingly, a method called GMCS (Generating Many Constant SNs from Few SNs) has been proposed to generate many constant SNs with low hardware overhead. However, if GMCS is used naively, the generated constant SNs are highly correlated with each other. This is a serious problem because high correlation among SNs causes large computation errors. Therefore, in this paper, we propose efficient methods to generate constant SNs with reasonably low hardware overhead without increasing errors. To reduce the correlation among constant SNs generated by GMCS, we use a Register-based Re-arrangement circuit using a Random bit stream Duplicator (RRRD). RRRDs have little influence on the hardware overhead because an RRRD consists of three multiplexers (MUXs) and two 1-bit FFs. We also share random number generators among several SNGs to reduce the hardware overhead. We provide experimental results confirming that the proposed methods are very useful for reducing the hardware overhead of generating constant SNs without increasing errors.
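
    For readers unfamiliar with SC, the following Python sketch shows the conventional SNG (LFSR plus comparator) and how reusing the same random source inflates the error of AND-based multiplication; it is a didactic toy with assumed tap positions, not the proposed GMCS/RRRD circuitry.

```python
def lfsr_stream(seed, length, width=8, taps=(7, 5, 4, 3)):
    """Pseudo-random integers from a simple Fibonacci LFSR (illustrative taps)."""
    state = seed
    for _ in range(length):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state

def sng(value, length, seed, width=8):
    """Stochastic number: bit i is 1 iff the LFSR output is below value * 2^width."""
    threshold = int(value * (1 << width))
    return [1 if r < threshold else 0 for r in lfsr_stream(seed, length, width)]

def and_multiply(x, y):
    """AND gate multiplication: exact only for uncorrelated bit streams."""
    return sum(xi & yi for xi, yi in zip(x, y)) / len(x)

length = 256
a = sng(0.5, length, seed=0b1010_1101)
b_same = sng(0.25, length, seed=0b1010_1101)   # correlated: same random source
b_diff = sng(0.25, length, seed=0b0110_0111)   # decorrelated: different source
# Only the decorrelated pair should land near 0.5 * 0.25 = 0.125.
print(and_multiply(a, b_same), and_multiply(a, b_diff))
```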

  • A Release-Aware Bug Triaging Method Considering Developers' Bug-Fixing Loads

    Yutaro KASHIWA  Masao OHIRA  

     
    PAPER-Software Engineering
    Publicized: 2019/10/25
    Vol: E103-D No:2
    Page(s): 348-362

    This paper proposes a release-aware bug triaging method that aims to increase the number of bugs that developers can fix by the next release date during open-source software development. A variety of methods have been proposed for recommending appropriate developers for particular bug-fixing tasks, but since these approaches consider only the developers' ability to fix bugs, they tend to assign many of the bugs to a small number of the project's developers. Since projects generally have a release schedule, even excellent developers cannot fix all the bugs assigned to them by the existing methods. The proposed method places an upper limit on the number of tasks assigned to each developer during a given period, in addition to considering the ability of developers. Our method regards the bug assignment problem as a multiple knapsack problem and finds the best combination of bugs and developers: one that maximizes the efficiency of the project while meeting the constraint that only as many bugs are assigned as the developers can fix during the given period. We conduct a case study, applying our method to bug reports from Mozilla Firefox, Eclipse Platform, and the GNU Compiler Collection (GCC). We find that our method has the following properties: (1) it prevents the bug-fixing load from being concentrated on a small number of developers; (2) compared with existing methods, it assigns a more appropriate number of bugs that each developer can fix by the next release date; and (3) it reduces the time taken to fix bugs by 35%-41% compared with manual bug triaging.
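
    The abstract frames triaging as a multiple knapsack problem; the sketch below is a simple greedy stand-in (not the authors' solver) that assigns bugs to developers under a per-developer capacity, with the suitability scores and fixing costs being assumed, made-up inputs.

```python
def triage(bugs, developers, capacity, suitability, cost):
    """Greedy assignment of bugs to developers under a per-release capacity.

    capacity[d]: total fixing effort developer d can spend before the release
    suitability[(b, d)]: how well d matches bug b (higher is better)
    cost[(b, d)]: effort d needs to fix b
    """
    remaining = dict(capacity)
    assignment = {}
    # Consider the most "valuable" (bug, developer) pairs first.
    pairs = sorted(((suitability[(b, d)], b, d) for b in bugs for d in developers),
                   reverse=True)
    for score, b, d in pairs:
        if b in assignment:
            continue                          # each bug assigned at most once
        if cost[(b, d)] <= remaining[d]:
            assignment[b] = d
            remaining[d] -= cost[(b, d)]      # knapsack constraint per developer
    return assignment

bugs, devs = ["B1", "B2", "B3"], ["alice", "bob"]
cap = {"alice": 5, "bob": 3}
keys = [(b, d) for b in bugs for d in devs]
suit = dict(zip(keys, [0.9, 0.2, 0.8, 0.7, 0.4, 0.6]))
cost = dict(zip(keys, [3, 2, 4, 2, 2, 1]))
print(triage(bugs, devs, cap, suit, cost))
```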

  • Formal Verification of a Decision-Tree Ensemble Model and Detection of Its Violation Ranges

    Naoto SATO  Hironobu KURUMA  Yuichiroh NAKAGAWA  Hideto OGAWA  

     
    PAPER-Dependable Computing
    Publicized: 2019/11/20
    Vol: E103-D No:2
    Page(s): 363-378

    As one type of machine-learning model, a “decision-tree ensemble model” (DTEM) is represented by a set of decision trees. A DTEM is mainly known to be effective for structured data; however, like other machine-learning models, it is difficult to train so that it returns the correct output value (called the “prediction value”) for any input value (called the “attribute value”). Accordingly, when a DTEM is used in a system that requires reliability, it is important to comprehensively detect attribute values that lead to malfunctions of the system (failures) during development and to take appropriate countermeasures. One conceivable solution is to install an input filter that controls the input to the DTEM and to use separate software to process attribute values that may lead to failures. To develop the input filter, it is necessary to specify the filtering condition for attribute values that lead to malfunctions. In consideration of that necessity, we propose a method for formally verifying a DTEM and, if an attribute value leading to a failure is found in the verification result, extracting the range in which such an attribute value exists. The proposed method can comprehensively extract this range; therefore, by creating an input filter based on it, the failure can be prevented. To demonstrate the feasibility of the proposed method, we performed a case study using a dataset of house prices. Through the case study, we also evaluated the method's scalability and showed that the number and depth of decision trees are important factors that determine its applicability.
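
    As a toy illustration of "extracting the range in which a failure-inducing attribute value exists", the sketch below walks a single hand-written decision tree and collects the attribute intervals of every leaf whose prediction violates a property; it only sketches the idea and is not the paper's DTEM verification procedure, which handles ensembles with formal methods.

```python
import math

# A tiny decision tree: internal nodes test one attribute against a threshold.
TREE = {"attr": "area", "thr": 50.0,
        "left":  {"leaf": 120.0},                          # predicted price
        "right": {"attr": "age", "thr": 30.0,
                  "left":  {"leaf": 480.0},
                  "right": {"leaf": -10.0}}}               # clearly broken output

def failing_ranges(node, box, violates):
    """Return attribute boxes (dict attr -> (low, high)) whose leaf prediction
    violates the given property."""
    if "leaf" in node:
        return [dict(box)] if violates(node["leaf"]) else []
    a, t = node["attr"], node["thr"]
    lo, hi = box.get(a, (-math.inf, math.inf))
    out = []
    if lo < t:   # left branch: attribute value < threshold
        out += failing_ranges(node["left"], {**box, a: (lo, min(hi, t))}, violates)
    if hi >= t:  # right branch: attribute value >= threshold
        out += failing_ranges(node["right"], {**box, a: (max(lo, t), hi)}, violates)
    return out

# Property: a predicted house price must be non-negative.
print(failing_ranges(TREE, {}, violates=lambda y: y < 0))
# -> [{'area': (50.0, inf), 'age': (30.0, inf)}]
```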

  • Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition

    Chihiro WATANABE  Kaoru HIRAMATSU  Kunio KASHINO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2019/10/23
    Vol: E103-D No:2
    Page(s): 390-397

    Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks. Since a trained layered neural network consists of complex nonlinear relationships among a large number of parameters, it is hard to understand how it achieves its input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping and to reveal the roles of hidden units in terms of their contribution to each principal task.
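
    The decomposition step can be illustrated with scikit-learn's off-the-shelf NMF; the "task matrix" below is random stand-in data (hidden units by data points, non-negative contributions), so this shows only the factorization mechanics, not the paper's method for building that matrix from a trained network.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in task matrix: rows = hidden units, columns = input-output samples,
# entries = non-negative contribution of a unit to a sample's prediction.
task_matrix = rng.random((30, 200))

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(task_matrix)   # (units x principal tasks) loadings
H = model.components_                  # (principal tasks x samples) activations

# Interpret each hidden unit by the principal task it contributes to most.
unit_roles = W.argmax(axis=1)
for task in range(4):
    units = np.flatnonzero(unit_roles == task)
    print(f"principal task {task}: hidden units {units.tolist()}")
```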

  • Automatic Construction of a Large-Scale Speech Recognition Database Using Multi-Genre Broadcast Data with Inaccurate Subtitle Timestamps

    Jeong-Uk BANG  Mu-Yeol CHOI  Sang-Hun KIM  Oh-Wook KWON  

     
    PAPER-Speech and Hearing
    Publicized: 2019/11/13
    Vol: E103-D No:2
    Page(s): 406-415

    As deep learning-based speech recognition systems are spotlighted, the need for large-scale speech databases for acoustic model training is increasing. Broadcast data can easily be used for database construction, since it contains transcripts for the hearing impaired. However, subtitle timestamps have not been used to extract speech data because they are often inaccurate due to the inherent characteristics of closed captioning. We therefore propose to build a large-scale speech database from multi-genre broadcast data with inaccurate subtitle timestamps. The proposed method first extracts the most likely speech intervals by removing subtitle texts with a low subtitle quality index, concatenating adjacent subtitle texts into a merged subtitle text, and adding a margin to the timestamp of the merged subtitle text. Next, a speech recognizer is used to produce a hypothesis text for the speech segment corresponding to the merged subtitle text, and the hypothesis text obtained from the decoder is recursively aligned with the merged subtitle text. Finally, the speech database is constructed by selecting the sub-parts of the merged subtitle text that match the hypothesis text. Our method successfully refines a large amount of broadcast data with inaccurate subtitle timestamps, taking about half the time of previous methods. Consequently, our method is useful for broadcast data processing, where bulk speech data can be collected every hour.
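
    The alignment-and-selection step can be approximated with Python's difflib; the sketch below keeps only the sub-parts of a merged subtitle that match the recognizer's hypothesis, which conveys the general idea, although the paper's recursive aligner works on word sequences and timestamps rather than this simplified matcher.

```python
from difflib import SequenceMatcher

def matching_subparts(subtitle_words, hypothesis_words, min_len=2):
    """Return subtitle word spans that also appear (in order) in the hypothesis."""
    sm = SequenceMatcher(a=subtitle_words, b=hypothesis_words, autojunk=False)
    spans = []
    for block in sm.get_matching_blocks():
        if block.size >= min_len:                 # ignore tiny accidental matches
            spans.append(subtitle_words[block.a:block.a + block.size])
    return spans

subtitle = "well today we will look at the weather across the whole country".split()
hypothesis = "today we will look at whether across the whole country".split()
for span in matching_subparts(subtitle, hypothesis):
    print(" ".join(span))
# -> "today we will look at" and "across the whole country"
```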

  • Constant-Q Deep Coefficients for Playback Attack Detection

    Jichen YANG  Longting XU  Bo REN  

     
    LETTER-Speech and Hearing
    Publicized: 2019/11/14
    Vol: E103-D No:2
    Page(s): 464-468

    Under the framework of traditional power-spectrum-based feature extraction, and in order to extract more discriminative information for playback attack detection, this paper proposes a feature, constant-Q deep coefficients (CQDC), that uses a deep neural network to describe the nonlinear relationship between the power spectrum and discriminative information. The feature relies on the constant-Q transform, a deep neural network, and the discrete cosine transform: the constant-Q transform converts the signal from the time domain into the frequency domain because, as a long-term transform, it provides more frequency detail; the deep neural network extracts more discriminative information to distinguish playback speech from genuine speech; and the discrete cosine transform decorrelates the feature dimensions. The ASVspoof 2017 corpus version 2.0 is used to evaluate the performance of CQDC. The experimental results show that CQDC outperforms existing features based on the power spectrum obtained from the constant-Q transform, with the equal error rate reduced by 19.18% to 51.56%. In addition, we found that the discriminative information of CQDC is spread across all frequency bins, which differs from commonly used features.
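
    A minimal sketch of a CQT-then-DNN-then-DCT style pipeline is shown below, assuming librosa for the constant-Q transform, a synthetic two-second signal in place of real speech, and a random ReLU layer as a stand-in for the trained deep network; none of these choices come from the paper.

```python
import numpy as np
import librosa
from scipy.fft import dct

# Synthetic two-second "utterance" so the sketch is self-contained.
sr = 16000
t = np.arange(2 * sr) / sr
y = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)

# Constant-Q power spectrum (long-term transform with fine low-frequency detail).
C = np.abs(librosa.cqt(y, sr=sr, hop_length=256)) ** 2
log_power = np.log(C + 1e-10)                      # (n_bins x frames)

# Stand-in for the trained DNN: one random ReLU layer (illustrative only).
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((64, log_power.shape[0]))
hidden = np.maximum(W @ log_power, 0.0)

# DCT along the feature axis decorrelates the dimensions, cepstrum-style.
cqdc_like = dct(hidden, axis=0, norm="ortho")[:20]
print(cqdc_like.shape)
```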

  • Simplified Triangular Partitioning Mode in Versatile Video Coding

    Dohyeon PARK  Jinho LEE  Jung-Won KANG  Jae-Gon KIM  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2019/10/30
    Vol: E103-D No:2
    Page(s): 472-475

    The emerging Versatile Video Coding (VVC) standard currently adopts Triangular Partitioning Mode (TPM) to enable more flexible inter prediction. Due to the motion search and motion storage required for TPM, the complexity of the encoder and decoder is significantly increased. This letter proposes two simplifications of TPM to reduce the complexity of the current design. One simplification reduces the number of combinations of motion vectors to be checked for the two partitions; it gives a 4% decrease in encoding time with negligible BD-rate loss. The other removes the reference picture remapping process in the motion vector storage of TPM, which reduces the complexity of the encoder and decoder without any BD-rate change for the random-access configuration.

  • Follow Your Silhouette: Identifying the Social Account of Website Visitors through User-Blocking Side Channel

    Takuya WATANABE  Eitaro SHIOJI  Mitsuaki AKIYAMA  Keito SASAOKA  Takeshi YAGI  Tatsuya MORI  

     
    PAPER-Network Security
    Publicized: 2019/11/11
    Vol: E103-D No:2
    Page(s): 239-255

    This paper presents a practical side-channel attack that identifies the social web service account of a visitor to an attacker's website. Our attack leverages the widely adopted user-blocking mechanism, abusing its inherent property that certain pages return different web content depending on whether one user has blocked another. Our key insight is that an account prepared by an attacker can hold an attacker-controllable binary state of blocking/non-blocking with respect to an arbitrary user on the same service; provided that the user is logged in to the service, this state can be retrieved as one bit of data through a conventional cross-site timing attack when the user visits the attacker's website. We generalize and refer to this property as visibility control, which we consider the fundamental assumption of our attack. Building on this primitive, we show that an attacker with a set of controlled accounts can gain complete and flexible control over the data leaked through the side channel. Using this mechanism, we show that it is possible to design and implement a robust, large-scale user identification attack on a wide variety of social web services. To verify the feasibility of our attack, we perform an extensive empirical study using 16 popular social web services and demonstrate that at least 12 of them are vulnerable to our attack. Vulnerable services include not only popular social networking sites such as Twitter and Facebook, but also other types of web services that provide social features, e.g., eBay and Xbox Live. We also demonstrate that the attack can achieve nearly 100% accuracy and can finish within a sufficiently short time in a practical setting. We discuss the fundamental principles, practical aspects, and limitations of the attack as well as possible defenses. We have successfully addressed this attack by working collaboratively with service providers and browser vendors.

  • Securing Cooperative Adaptive Cruise Control in Vehicular Platoons via Cooperative Message Authentication

    Na RUAN  Chunhua SU  Chi XIE  

     
    PAPER-Network Security
    Publicized: 2019/11/25
    Vol: E103-D No:2
    Page(s): 256-264

    The requirements of safety, roadway capacity, and efficiency in vehicular networks keep the vehicular platoon concept of continued interest. For authentication in vehicular platoons, efficiency and cooperation are the two most important properties. Cooperative authentication is a way to recognize false identities and messages as well as to save resources. However, taking part in cooperative authentication makes a vehicle more vulnerable to privacy leakage, commonly through location tracking. Moreover, vehicles consume their own resources when cooperating with others during cooperative authentication. These two factors cause selfish behavior, in which vehicles do not actively participate in cooperation. In this paper, an infinitely repeated game for cooperative authentication in vehicular platoons is proposed to help analyze the utility of all nodes and point out the weakness of the current cooperative authentication protocol. To address this weakness, we also devise an enhanced cooperative authentication protocol based on mechanisms that make it easier for vehicles to stay with the cooperative strategy rather than drift toward selfish behavior. Meanwhile, our protocol can defend against insider attacks.

  • Decentralized Supervisory Control of Timed Discrete Event Systems with Conditional Decisions for Enforcing Forcible Events

    Shimpei MIURA  Shigemasa TAKAI  

     
    PAPER
    Vol: E103-A No:2
    Page(s): 417-427

    In this paper, we introduce conditional decisions for enforcing forcible events in the decentralized supervisory control framework for timed discrete event systems. We first present sufficient conditions for the existence of a decentralized supervisor with conditional decisions. These sufficient conditions are weaker than the necessary and sufficient conditions for the existence of a decentralized supervisor without conditional decisions. We next show that the presented sufficient conditions are also necessary under the assumption that if the occurrence of the event tick, which represents the passage of one time unit, is illegal, then a legal forcible event that should be forced to occur uniquely exists. In addition, we develop a method for verifying the presented conditions under the same assumption.

  • Statistical Analysis of Phase-Only Correlation Functions Between Two Signals with Stochastic Phase-Spectra Following Bivariate Circular Probability Distributions

    Shunsuke YAMAKI  Ryo SUZUKI  Makoto YOSHIZAWA  

     
    PAPER-Digital Signal Processing
    Vol: E103-A No:2
    Page(s): 478-485

    This paper presents a statistical analysis of phase-only correlation functions between two signals whose stochastic phase spectra follow bivariate circular probability distributions, based on directional statistics. We give general expressions for the expectation and variance of phase-only correlation functions in terms of the joint characteristic functions of the bivariate circular probability density function. In particular, if we assume bivariate wrapped distributions for the phase spectra, we obtain exactly the same results for a bivariate linear distribution and its corresponding bivariate wrapped distribution.
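
    For reference, a minimal sketch of the quantities such an analysis works with, assuming the standard definition of the phase-only correlation (POC) function for length-N signals with phase spectra θ_X(k) and θ_Y(k); the reduction to the joint characteristic function is a generic observation, not the paper's specific derivation.

```latex
% Phase-only correlation function (standard definition, assumed here):
r(m) \;=\; \frac{1}{N}\sum_{k=0}^{N-1} e^{\,j\{\theta_X(k)-\theta_Y(k)\}}\, e^{\,j 2\pi k m / N}.
% Taking the expectation over the stochastic phase spectra, each term reduces to the
% joint characteristic function \varphi of the bivariate circular distribution at (1,-1):
\mathbb{E}[r(m)] \;=\; \frac{1}{N}\sum_{k=0}^{N-1}
  \mathbb{E}\!\left[e^{\,j\{\theta_X(k)-\theta_Y(k)\}}\right] e^{\,j 2\pi k m / N}
  \;=\; \varphi(1,-1)\,\delta(m),
% where the last equality holds when the phase spectra are identically distributed
% across k, and \delta(m) is the Kronecker delta (1 for m \equiv 0 \bmod N, else 0).
```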

  • Users' Preference Prediction of Real Estate Properties Based on Floor Plan Analysis

    Naoki KATO  Toshihiko YAMASAKI  Kiyoharu AIZAWA  Takemi OHAMA  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2019/11/20
    Vol: E103-D No:2
    Page(s): 398-405

    With the recent advances in e-commerce, it has become important to recommend not only mass-produced daily items, such as books, but also items that are not mass-produced. In this study, we present an algorithm for real estate recommendation. Automatic property recommendation is a highly difficult task because no two identical properties exist, occupied properties cannot be recommended, and users rent or buy properties only a few times in their lives. As a first step toward property recommendation, we predict users' preferences for properties by combining content-based filtering and a Multi-Layer Perceptron (MLP). In the MLP, we use not only attribute data of users and properties but also deep features extracted from property floor plan images. As a result, we successfully predict users' preferences with a Matthews Correlation Coefficient (MCC) of 0.166.
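
    For context on the reported metric, the Matthews Correlation Coefficient for binary like/dislike predictions can be computed as below; the labels are made up, and sklearn is used only to show the metric, not the paper's MLP or floor-plan features.

```python
from sklearn.metrics import matthews_corrcoef

# 1 = user liked the property, 0 = user did not (made-up evaluation labels).
y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); ranges from -1 to +1,
# with 0 meaning no better than chance.
print(f"MCC = {matthews_corrcoef(y_true, y_pred):.3f}")   # ~0.408 for these labels
```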

  • Adaptive HARQ Transmission of Polar Codes with a Common Information Set

    Hao LIANG  Aijun LIU  Heng WANG  Kui XU  

     
    LETTER-Coding Theory
    Vol: E103-A No:2
    Page(s): 553-555

    This Letter explores adaptive hybrid automatic repeat request (HARQ) using rate-compatible polar codes constructed with a common information set. The rate adaptation problem is formulated as a Markov decision process and solved with a dynamic programming framework in a low-complexity way. Simulations verify the throughput efficiency of the proposed adaptive HARQ.

  • Transmission Enhancement in Rectangular-Coordinate Orthogonal Multiplexing by Excitation Optimization of Slot Arrays for a Given Distance in the Non-Far Region Communication

    Ryotaro OHASHI  Takashi TOMURA  Jiro HIROKAWA  

     
    PAPER-Antennas and Propagation
    Publicized: 2019/08/22
    Vol: E103-B No:2
    Page(s): 130-138

    This paper presents the optimization of the excitation coefficients of slot array antennas for increasing channel capacity in 2×2-mode two-dimensional rectangular-coordinate orthogonal multiplexing (ROM) transmission. Because ROM transmission targets non-far-region communication, the transmission between the Tx (transmitting) and Rx (receiving) antennas increases when the antennas radiate beams inwardly. First, we design the excitation coefficients of the slot arrays to enhance the transmission rate for a given transmission distance. Then, we fabricate monopulse corporate-feed waveguide slot array antennas that realize the designed excitation amplitude and phase in the 60-GHz band for 2×2-mode two-dimensional ROM transmission. The measured transmission between the fabricated Tx and Rx antennas increases at the given propagation distance and agrees with the simulation.
