
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E101-D No.10  (Publication Date:2018/10/01)

    Regular Section
  • Improving Per-Node Computing Efficiency by an Adaptive Lock-Free Scheduling Model

    Zhishuo ZHENG  Deyu QI  Naqin ZHOU  Xinyang WANG  Mincong YU  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/07/06
      Page(s):
    2423-2435

    Job scheduling on many-core computers with tens or even hundreds of processing cores is one of the key technologies in High Performance Computing (HPC) systems. Although many scheduling algorithms have been proposed, scheduling jobs efficiently on a single computing node with diverse scheduling objectives remains a challenge. Moreover, the increasing scale and the need to respond rapidly to changing requirements are hard to meet with existing scheduling models in an HPC node. To address these issues, we propose a novel adaptive scheduling model for a single node with a many-core processor; this model solves the problems of scheduling efficiency and scalability through an adaptive optimistic control mechanism. The mechanism exposes job information to all the cores, together with the tools necessary to take advantage of that information, so that the cores compete for resources in an uncoordinated manner. At the same time, the mechanism is equipped with adaptive control, allowing it to adjust the number of running tools dynamically when conflicts occur frequently. We justify this scheduling model and present simulation results for synthetic and real-world HPC workloads, in which we compare our proposed model with two widely used scheduling models, i.e., multi-path monolithic and two-level scheduling. The proposed approach outperforms the other models in scheduling efficiency and scalability. Our results demonstrate that adaptive optimistic control affords significant improvements in the parallelism and performance of the node-level scheduling model for HPC workloads.
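    As a rough illustration of the optimistic claim-and-retry idea described above (a simplified sketch, not the paper's implementation; the class name, the conflict-rate threshold of 0.5, and the worker-halving policy are all assumptions):

    ```python
    import threading

    class OptimisticScheduler:
        """Cores claim jobs optimistically and back off when conflicts are frequent."""

        def __init__(self, jobs):
            self.slots = {job: None for job in jobs}  # job -> claiming core
            self.lock = threading.Lock()  # stands in for a hardware compare-and-swap
            self.conflicts = 0

        def try_claim(self, job, core):
            # Optimistic claim: succeed only if the slot is still empty (CAS semantics).
            with self.lock:
                if self.slots[job] is None:
                    self.slots[job] = core
                    return True
                self.conflicts += 1
                return False

        def active_workers(self, max_workers, attempts):
            # Adaptive control: shrink the number of competing workers when the
            # observed conflict rate is high (the 0.5 threshold is an assumption).
            rate = self.conflicts / attempts if attempts else 0.0
            return max(1, max_workers // 2) if rate > 0.5 else max_workers
    ```

    In this sketch, two cores racing for the same job resolve the conflict without coordination: exactly one claim succeeds, and a rising conflict rate throttles the number of competitors.
    
    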

  • Spectrum-Based Fault Localization Using Fault Triggering Model to Refine Fault Ranking List

    Yong WANG  Zhiqiu HUANG  Rongcun WANG  Qiao YU  

     
    PAPER-Software Engineering

      Publicized:
    2018/07/04
      Page(s):
    2436-2446

    Spectrum-based fault localization (SFL) is a lightweight approach that aims to help debuggers identify root causes of failures by measuring the suspiciousness of each program component being faulty and generating a hypothetical fault ranking list. Although SFL techniques have been shown to be effective, the faulty component in a buggy program cannot always be ranked at the top because of its complex fault triggering model, and it is extremely difficult to model the complex triggering behavior of all buggy programs. To address this issue, we propose two simple fault triggering models (RIPRα and RIPRβ) and a refinement technique that improves the absolute ranking of faults based on the two models, by ruling out some higher-ranked components according to their fault triggering models. Intuitively, our approach is effective if a faulty component is ranked within the top k in the two fault ranking lists output by the two fault localization strategies. Experimental results show that our approach can significantly improve the absolute fault ranking in the three cases examined.
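    The abstract builds on standard SFL suspiciousness ranking; as background, a minimal sketch of one widely used metric (Ochiai, shown here as general context, not the paper's RIPR models) looks like this:

    ```python
    import math

    def ochiai(failed_cover, passed_cover, total_failed):
        """Ochiai suspiciousness: failed_cover/passed_cover are the numbers of
        failing/passing tests that cover the component; total_failed is the
        total number of failing tests."""
        denom = math.sqrt(total_failed * (failed_cover + passed_cover))
        return failed_cover / denom if denom else 0.0

    def rank(spectra, total_failed):
        # spectra: {component: (failed_cover, passed_cover)} -> components,
        # most suspicious first (the hypothetical fault ranking list).
        scores = {c: ochiai(f, p, total_failed) for c, (f, p) in spectra.items()}
        return sorted(scores, key=scores.get, reverse=True)
    ```

    A refinement step like the one proposed would then prune higher-ranked components that are inconsistent with the fault triggering model, pushing the true fault toward the top.
    
    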

  • Uncertain Rule Based Method for Determining Data Currency

    Mohan LI  Jianzhong LI  Siyao CHENG  Yanbin SUN  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2018/07/10
      Page(s):
    2447-2457

    Currency is one of the important measures of data quality. The main purpose of studying data currency is to determine whether a given data item is up-to-date. Although several works on determining data currency already exist, all the proposed methods have limitations. Some require timestamps of data items that are not always available, and others are based on certain currency rules that can only decide relative currency and cannot express uncertain semantics. To overcome these limitations, this paper introduces a new approach for determining data currency based on uncertain currency rules. First, a class of uncertain currency rules is provided to infer the possible valid time of a given data item, and data currency is then formally defined based on these rules. After that, a polynomial-time algorithm for evaluating data currency with the uncertain currency rules is given. Using real-life data sets, the effectiveness and efficiency of the proposed method are experimentally verified.

  • MinDoS: A Priority-Based SDN Safe-Guard Architecture for DoS Attacks

    Tao WANG  Hongchang CHEN  Chao QI  

     
    PAPER-Information Network

      Publicized:
    2018/05/02
      Page(s):
    2458-2464

    Software-defined networking (SDN) has rapidly emerged as a promising technology for future networks and has gained considerable attention from both academia and industry. However, because of the separation between the control plane and the data plane, the SDN controller can easily become the target of denial-of-service (DoS) attacks. To mitigate DoS attacks in OpenFlow networks, our solution, MinDoS, contains two key modules: a simplified DoS detection module and a priority manager. The proposed architecture sorts incoming flow requests into multiple buffer queues with different priorities and then schedules the processing of these requests to better protect the controller. The results show that MinDoS is effective and adds only minor overhead to the entire SDN/OpenFlow infrastructure.

  • Client-Side Evil Twin Attacks Detection Using Statistical Characteristics of 802.11 Data Frames

    Qian LU  Haipeng QU  Yuan ZHUANG  Xi-Jun LIN  Yuzhan OUYANG  

     
    PAPER-Information Network

      Publicized:
    2018/07/02
      Page(s):
    2465-2473

    With the development of wireless network technology and the popularization of mobile devices, the Wireless Local Area Network (WLAN) has become an indispensable part of our daily life. Although the 802.11-based WLAN provides enormous convenience for users to access the Internet, it also gives rise to a number of security issues. One of the most severe threats encountered by Wi-Fi users is the evil twin attack. The evil twin, a kind of rogue access point (RAP), masquerades as a legitimate access point (AP) to lure users into connecting to it. Because it is strongly concealed, highly confusing, very harmful, and easy to implement, the evil twin has led to significant losses of sensitive information and has become one of the most prominent security threats in recent years. In this paper, we propose a passive client-based detection solution that enables users to independently identify and locate evil twins without any assistance from a wireless network administrator. Exploiting the forwarding behavior of evil twins, the proposed method compares 802.11 data frames sent by target APs to users to detect evil twin attacks. We implemented our detection technique in a Python tool named ET-spotter. In our evaluation, the algorithm achieves 96% accuracy in distinguishing evil twins from legitimate APs.
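    As a hypothetical illustration of the forwarding cue the abstract relies on (the payload-matching scheme and function name below are assumptions, not ET-spotter's actual algorithm): an evil twin relays the legitimate AP's traffic, so the same data frame payload tends to be observed under two different BSSIDs.

    ```python
    from collections import defaultdict

    def find_forwarding_aps(frames):
        """frames: list of (bssid, payload_fingerprint) observations captured
        passively by the client. Returns candidate AP pairs where one AP
        appears to be forwarding the other's data frames."""
        seen = defaultdict(set)
        for bssid, payload in frames:
            seen[payload].add(bssid)
        # A payload observed from two distinct BSSIDs suggests a forwarding pair
        # (legitimate AP plus its evil twin).
        return {frozenset(bssids) for bssids in seen.values() if len(bssids) > 1}
    ```

    A real detector would of course work on statistical characteristics of many frames rather than single matches, but the sketch shows why forwarding behavior is observable from the client side alone.
    
    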

  • Weighting Estimation Methods for Opponents' Utility Functions Using Boosting in Multi-Time Negotiations

    Takaki MATSUNE  Katsuhide FUJITA  

     
    PAPER-Information Network

      Publicized:
    2018/07/10
      Page(s):
    2474-2484

    Recently, multi-issue closed negotiations have attracted attention in multi-agent systems. In particular, multi-time and multilateral negotiation strategies are important topics in multi-issue closed negotiations. In such negotiations, an automated negotiating agent needs strategies for estimating an opponent's utility function by learning the opponent's behaviors, since the opponent's utility information is not disclosed. However, estimating an opponent's utility function is difficult for two reasons: (1) training datasets for the estimation cannot be obtained, and (2) it is hard to apply a learned model to different negotiation domains and opponents. In this paper, we propose a novel method for estimating opponents' utility functions using boosting based on the least-squares method and nonlinear programming. Our method weights the utility functions estimated by several existing estimation methods and outputs an improved utility function by summing the weighted functions. The existing methods used in the boosting are based on the frequency-based method, which counts the number of times each value is offered, considering the time elapsed when it was offered. Our experimental results demonstrate that the accuracy of estimating opponents' utility functions is significantly improved under various conditions compared with the existing estimation methods without boosting.
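    The weighting step can be sketched as a least-squares fit (a simplified reading of the abstract; the function names are assumptions, and the paper additionally uses nonlinear programming):

    ```python
    import numpy as np

    def boost_weights(estimates, target):
        """estimates: one column per existing estimation method, one row per
        bid, giving each method's utility estimate; target: reference
        utilities. Solves for the least-squares combination weights."""
        w, *_ = np.linalg.lstsq(estimates, target, rcond=None)
        return w

    def combined_utility(estimates, w):
        # The improved utility function is the weighted sum of the
        # individual methods' estimates.
        return estimates @ w
    ```

    With two estimators and three bids, for example, the fit recovers the weights that best reproduce the reference utilities, and the weighted sum is the boosted estimate.
    
    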

  • Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers

    Hyun KWON  Yongchul KIM  Ki-Woong PARK  Hyunsoo YOON  Daeseon CHOI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/07/06
      Page(s):
    2485-2500

    Deep neural networks (DNNs) are widely used in many applications, such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to small distortions of an image that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. An unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples that achieve reasonable attack success rates against unknown classifiers. Our experimental results show that, compared with the previous ensemble method and the conventional baseline method respectively, the proposed method achieves attack success rates against an unknown classifier up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data.

  • Individuality-Preserving Gait Pattern Prediction Based on Gait Feature Transitions

    Tsuyoshi HIGASHIGUCHI  Norimichi UKITA  Masayuki KANBARA  Norihiro HAGITA  

     
    PAPER-Pattern Recognition

      Publicized:
    2018/07/20
      Page(s):
    2501-2508

    This paper proposes a method for predicting individuality-preserving gait patterns. Physical rehabilitation can be performed using visual and/or physical instructions from physiotherapists or exoskeletal robots. However, template-based rehabilitation may produce discomfort and pain in a patient because of deviations from the patient's natural gait. Our work addresses this problem by predicting an individuality-preserving gait pattern for each patient. In this prediction, the transition of gait patterns is modeled by associating the sequence of a 3D skeleton in gait with its continuous-valued gait features (e.g., walking speed or step width). In the space of the prediction model, the arrangement of the gait patterns is optimized so that (1) similar gait patterns are close to each other and (2) the gait features change smoothly between neighboring gait patterns. This model allows us to predict an individuality-preserving gait pattern for each patient even when that patient's own varied gait patterns are not available for prediction. The effectiveness of the proposed method is demonstrated quantitatively with two datasets.

  • Finding Important People in a Video Using Deep Neural Networks with Conditional Random Fields

    Mayu OTANI  Atsushi NISHIDA  Yuta NAKASHIMA  Tomokazu SATO  Naokazu YOKOYA  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2018/07/20
      Page(s):
    2509-2517

    Finding important regions is essential for applications such as content-aware video compression and video retargeting, which automatically crop a region in a video for small screens. Since people are among the main subjects when taking a video, some methods for finding important regions use a visual attention model based on face/pedestrian detection to incorporate the knowledge that people are important. However, such methods usually do not distinguish important people from passers-by and bystanders, which results in false positives. In this paper, we propose a deep neural network (DNN)-based method that classifies each person as important or unimportant, given a video containing multiple people in a single frame and captured with a hand-held camera. Intuitively, the important/unimportant labels of people whose spatial motions are similar are highly correlated. Based on this assumption, we boost the performance of our important/unimportant classification by using conditional random fields (CRFs) built upon the DNN, which can be trained in an end-to-end manner. Our experimental results show that our method successfully classifies important people and that the use of a DNN with CRFs improves the accuracy.

  • RbWL: Recency-Based Static Wear Leveling for Lifetime Extension and Overhead Reduction in NAND Flash Memory Systems

    Sang-Ho HWANG  Jong Wook KWAK  

     
    LETTER-Software System

      Publicized:
    2018/07/09
      Page(s):
    2518-2522

    In this letter, we propose a static wear-leveling technique called Recency-based Wear Leveling (RbWL). The basic idea of RbWL is to execute static wear leveling at a minimum level, because frequent migrations of cold data by static wear leveling cause significant overhead in a NAND flash memory system. RbWL adjusts the execution frequency according to a threshold value that reflects the lifetime difference between hot and cold blocks and the total lifetime of the NAND flash memory system. The evaluation results show that RbWL improves the lifetime of NAND flash memory systems by 52% and also reduces the overhead of wear leveling by 8% to 42% and by 13% to 51%, in terms of the number of erase operations and the number of migrations of valid pages, respectively, compared with other algorithms.
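    A minimal sketch of a threshold-triggered static wear-leveling check in the spirit described above (the exact trigger policy, threshold semantics, and function names are assumptions, not RbWL's precise rule):

    ```python
    def should_run_static_wl(erase_counts, threshold):
        """Trigger cold-data migration only when the wear gap between the
        most- and least-erased blocks exceeds the threshold, so that static
        wear leveling runs at a minimum level."""
        return max(erase_counts) - min(erase_counts) > threshold

    def pick_migration_pair(erase_counts):
        # Move cold data off the least-worn block onto the most-worn block,
        # freeing the young block to absorb future hot writes.
        coldest = min(range(len(erase_counts)), key=erase_counts.__getitem__)
        hottest = max(range(len(erase_counts)), key=erase_counts.__getitem__)
        return coldest, hottest
    ```

    Raising the threshold trades wear uniformity for fewer migrations, which is the overhead/lifetime balance the letter tunes.
    
    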

  • A Quantitative Analysis on Relationship between an Early-Closed Bug and Its Amount of Clues: A Case Study of Apache Ant

    Akito SUNOUCHI  Hirohisa AMAN  Minoru KAWAHARA  

     
    LETTER-Software Engineering

      Publicized:
    2018/06/22
      Page(s):
    2523-2525

    Once a bug is reported, a major concern is whether or not the bug will be resolved (closed) soon. Through a case study, this paper examines seven metrics that quantify the amount of clues available for the early close of reported bugs. The results show that one of the metrics, the similarity to already-closed bug reports, is strongly related to early-closed bugs.

  • Search-Based Concolic Execution for SW Vulnerability Discovery

    Rustamov FAYOZBEK  Minjun CHOI  Joobeom YUN  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2018/07/02
      Page(s):
    2526-2529

    Huge amounts of software appear nowadays, and as the amount of software grows, so does the number of software vulnerabilities. Although some automatic methods have been proposed to detect and remove software vulnerabilities, they still require a great deal of time, which limits their use in the real world. To solve this problem, we propose BugHunter, which automatically tests a binary file compiled with a C++ compiler. It searches for unsafe API calls and automatically drives execution to the program blocks that contain them. Experiments show that BugHunter is more efficient than angr. As a result, BugHunter is very helpful for finding software vulnerabilities in a short time.

  • Impact of Viewing Distance on Task Performance and Its Properties

    Makio ISHIHARA  Yukio ISHIHARA  

     
    LETTER-Human-computer Interaction

      Publicized:
    2018/07/02
      Page(s):
    2530-2533

    This paper discusses VDT syndrome from the point of view of the viewing distance between a computer screen and the user's eyes. It presents a series of experiments showing the impact of the viewing distance on task performance. In the experiments, two viewing distances of 50 cm and 350 cm with the same viewing angle of 30 degrees are considered. The results show that the long viewing distance enables people to manipulate the mouse more slowly, more correctly, and more precisely than the short one.

  • TS-ICNN: Time Sequence-Based Interval Convolutional Neural Networks for Human Action Detection and Recognition

    Zhendong ZHUANG  Yang XUE  

     
    LETTER-Human-computer Interaction

      Publicized:
    2018/07/20
      Page(s):
    2534-2538

    Research on inertial-sensor-based human action detection and recognition (HADR) is a new area in machine learning. We propose a novel time-sequence-based interval convolutional neural network framework for HADR that combines a generator of interesting interval proposals with an interval-based classifier. Experiments demonstrate the good performance of our method.

  • Low Bit-Rate Compression Image Restoration through Subspace Joint Regression Learning

    Zongliang GAN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/06/28
      Page(s):
    2539-2542

    In this letter, an effective low bit-rate image restoration method is proposed that combines image denoising and subspace regression learning. The proposed framework has two parts: estimation of the image's main structure by classical NLM denoising and prediction of the texture component by subspace joint regression learning. Local regression functions are learned from denoised patches to original patches in each subspace, where the corresponding compressed image patches are employed to generate anchoring points by a dictionary learning approach. Moreover, we extend Extreme Support Vector Regression (ESVR) to multi-variable nonlinear regression to obtain more robust results. Experimental results demonstrate that the proposed method achieves favorable performance compared with other leading methods.

  • Standard-Compliant Multiple Description Image Coding Based on Convolutional Neural Networks

    Ting ZHANG  Huihui BAI  Mengmeng ZHANG  Yao ZHAO  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/07/19
      Page(s):
    2543-2546

    Multiple description (MD) coding is an attractive framework for robust information transmission over non-prioritized and unpredictable networks. In this paper, a novel MD image coding scheme based on convolutional neural networks (CNNs) is proposed, which aims to improve the reconstruction quality of the side and central decoders. To this end, a given image is first encoded into two independent descriptions by sub-sampling; this design makes the proposed method compatible with existing image coding standards. At the decoder, to achieve high-quality side and central image reconstruction, three CNNs, comprising two side decoder sub-networks and one central decoder sub-network, are adopted in an end-to-end reconstruction framework. Experimental results show the improvement achieved by the proposed scheme in terms of both peak signal-to-noise ratio and subjective quality, demonstrating better central and side rate-distortion performance.

  • Twofold Correlation Filtering for Tracking Integration

    Wei WANG  Weiguang LI  Zhaoming CHEN  Mingquan SHI  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/07/10
      Page(s):
    2547-2550

    In general, effectively integrating the advantages of different trackers can yield a unified performance improvement. In this work, we study the integration of multiple correlation filter (CF) trackers and propose a novel but simple tracking integration method that combines different trackers at the filter level. Because the trackers differ in their correlation filters and features, their results are not directly comparable for integration. To tackle this, we propose a twofold CF that unifies the various response maps so that the results of different tracking algorithms can be compared, thereby boosting tracking performance in the manner of ensemble learning. An experiment integrating two CF methods on the OTB datasets demonstrates that the proposed method is effective and promising.
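    A toy sketch of filter-level response-map fusion (the min-max normalization and simple averaging here are assumptions for illustration, not the paper's twofold CF formulation):

    ```python
    import numpy as np

    def fuse_response_maps(maps):
        """Rescale each CF tracker's response map to [0, 1] so maps produced
        with different filters and features become comparable, then average
        them and return the fused peak location (row, col)."""
        normed = []
        for m in maps:
            m = np.asarray(m, dtype=float)
            span = m.max() - m.min()
            normed.append((m - m.min()) / span if span else np.zeros_like(m))
        fused = np.mean(normed, axis=0)
        # The peak of the fused map gives the integrated target location.
        return np.unravel_index(np.argmax(fused), fused.shape)
    ```

    When the individual trackers agree, their normalized peaks reinforce each other; when one tracker drifts, the fused map is pulled back toward the consensus, which is the ensemble-like benefit the letter describes.
    
    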

  • Improving Distantly Supervised Relation Extraction by Knowledge Base-Driven Zero Subject Resolution

    Eun-kyung KIM  Key-Sun CHOI  

     
    LETTER-Natural Language Processing

      Publicized:
    2018/07/11
      Page(s):
    2551-2558

    This paper introduces a technique for automatically generating potential training data for relation extraction from sentences in which entity pairs are not explicitly present. Most previous work on relation extraction by distant supervision ignores cases in which a relationship is expressed via null subjects or anaphora. However, natural language text fundamentally has a network structure composed of several sentences; when sentences are closely related, the relationship is often not expressed explicitly in the text, which can make relation extraction difficult. This paper describes a new model that augments a paragraph with a “salient entity” determined without parsing. This entity can create additional tuple extraction environments as a potential subject in the paragraph. Including the salient entity as part of the sentential input may allow the proposed method to identify relationships that conventional methods cannot. The method also has promising applicability to languages for which advanced natural language processing tools are lacking.