
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4

Advance publication (published online immediately after acceptance)

Volume E100-D No.5 (Publication Date: 2017/05/01)

    Special Section on the Architectures, Protocols, and Applications for the Future Internet
  • FOREWORD Open Access

    Takuo SUGANUMA  

     
    FOREWORD

      Page(s):
    931-931
  • NAPT-Based Mobility Service for Software Defined Networks Open Access

    Shimin SUN  Li HAN  Xianshu JIN  Sunyoung HAN  

     
    INVITED PAPER

      Publicized:
    2017/02/13
      Page(s):
    932-938

    For IP-based mobile networks, efficient mobility management is vital to provisioning seamless online service. IP address starvation and scalability issues constrain the wide deployment of existing mobility schemes such as Mobile IP, Proxy Mobile IP, and their derivatives. Most studies focus on mobility among public networks; however, many current networks, such as home networks, sensor networks, and enterprise networks, are deployed as private networks to which these mobility solutions are hard to apply. With its rapid development, Software Defined Networking (SDN) offers an opportunity to support mobility in private networks. In this paper, a novel mobility management scheme is presented that supports a mobile node moving from a public network to a private network with a seamless handover procedure. The centralized control and flexible flow management of SDN are utilized to provide network-based mobility support with better QoS guarantees. Benefiting from SDN/OpenFlow technology, the complex handover process is simplified with fewer message exchanges. Furthermore, handover efficiency is improved in terms of delay and overhead reduction, scalability, and security. Analysis and implementation results show better performance than Mobile IP in terms of latency and throughput variation.

  • SDN-Based Self-Organizing Energy Efficient Downlink/Uplink Scheduling in Heterogeneous Cellular Networks Open Access

    Seungil MOON  Thant Zin OO  S. M. Ahsan KAZMI  Bang Ju PARK  Choong Seon HONG  

     
    INVITED PAPER

      Publicized:
    2017/02/18
      Page(s):
    939-947

    The increase in network access devices and user demand for high quality of service (QoS) has left network operators with insufficient capacity. Moreover, the existing control equipment and mechanisms are not flexible and agile enough for the dynamically changing environment of heterogeneous cellular networks (HetNets). This non-agile control plane is hard to scale with ever-increasing traffic demand and has become the performance bottleneck. Furthermore, the new HetNet architecture requires tight coordination and cooperation among the densely deployed small cell base stations, particularly for interference mitigation and dynamic frequency reuse and sharing. These issues further complicate the existing control plane and can cause serious inefficiencies in terms of users' quality of experience and network performance. This article presents an SDN control framework for energy-efficient downlink/uplink scheduling in HetNets. The framework decouples the control plane from the data plane by means of a logically centralized controller with distributed agents implemented in separate entities of the network (users and base stations). The scheduling problem consists of four coupled sub-problems that must be solved simultaneously: (i) user association, (ii) power control, (iii) resource allocation, and (iv) interference mitigation. We formulate DL/UL scheduling in HetNets as an optimization problem and use the Markov approximation framework to propose a distributed economical algorithm. We then divide the algorithm into sub-routines for (i) user association, (ii) power control, (iii) resource allocation, and (iv) interference mitigation, which are implemented on different agents of the SDN framework. We run extensive simulations to validate our proposal and present the performance analysis.

  • Bufferless Bidirectional Multi-Ring Networks with Sharing an Optical Burst Mode Transceiver for Any Route

    Kyota HATTORI  Masahiro NAKAGAWA  Toshiya MATSUDA  Masaru KATAYAMA  Katsutoshi KODA  

     
    PAPER

      Publicized:
    2017/02/08
      Page(s):
    948-962

    Improving conventional networks with an incremental approach is an important design method for developing the future Internet. Following this approach, we are developing a future aggregation network based on passive optical network (PON) technology that achieves both cost-effectiveness and high reliability. In this paper, we propose a timeslot (TS) synchronization method for sharing a TS from an optical burst mode transceiver between any routes of arbitrary fiber length by changing both the route of the TS transmission and the TS control timing on the optical burst mode transceiver. Using prototype systems, we show the effectiveness of the proposed method for exchanging TSs in bidirectional bufferless wavelength division multiplexing (WDM) and time division multiplexing (TDM) multi-ring networks when a link failure occurs. We also evaluate the reduction in the required number of optical interfaces in a multi-ring network achieved by applying the proposed method.

  • Simulation Study of Low Latency Network Architecture Using Mobile Edge Computing

    Krittin INTHARAWIJITR  Katsuyoshi IIDA  Hiroyuki KOGA  

     
    PAPER

      Publicized:
    2017/02/08
      Page(s):
    963-972

    Attaining extremely low latency service in 5G cellular networks is an important challenge in the communication research field. Higher QoS in the next-generation network could enable several unprecedented services, such as the Tactile Internet, augmented reality, and virtual reality. However, these services will all need support from powerful computational resources provided through cloud computing. Unfortunately, the geolocation of cloud data centers can be insufficient to satisfy the latency targeted in 5G networks: the physical distance between servers and users will sometimes be too great to enable quick reaction within the service time boundary. The problem of long latency caused by long communication distances can be addressed by Mobile Edge Computing (MEC), which places many servers along the edges of networks. MEC can provide shorter communication latency, but total latency consists of both the transmission and the processing times. Always selecting the closest edge server will often lead to longer computing latency, especially when a mass of users gathers around particular edge servers. Therefore, this research studies the effects of both latencies. Communication latency is represented by hop count, and computation latency is modeled by processor sharing (PS). An optimization model and selection policies are also proposed. Quantitative evaluations using simulations show that selecting a server according to the lowest total latency leads to the best performance, and permitting an over-latency barrier further improves the results.
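    A minimal Python sketch of the server-selection idea described above: total latency is modeled as communication latency (proportional to hop count) plus computing latency under processor sharing, and the edge server with the lowest total latency is chosen. All function names, parameters, and numbers are illustrative assumptions, not values from the paper.

        # Illustrative sketch only: communication latency ~ hop count, computing
        # latency ~ processor-sharing (M/M/1-PS) mean sojourn time 1/(mu - lambda).
        def total_latency(hops, per_hop_delay, arrival_rate, service_rate):
            if arrival_rate >= service_rate:                 # overloaded server
                return float("inf")
            communication = hops * per_hop_delay             # transmission part
            computing = 1.0 / (service_rate - arrival_rate)  # processing part
            return communication + computing

        def select_edge_server(servers):
            # "lowest total latency" selection policy
            return min(servers, key=lambda s: total_latency(
                s["hops"], s["per_hop_delay"], s["arrival_rate"], s["service_rate"]))

        servers = [
            {"name": "edge-A", "hops": 1, "per_hop_delay": 0.5,
             "arrival_rate": 9.5, "service_rate": 10.0},     # close but congested
            {"name": "edge-B", "hops": 4, "per_hop_delay": 0.5,
             "arrival_rate": 2.0, "service_rate": 10.0},     # farther but idle
        ]
        print(select_edge_server(servers)["name"])           # -> edge-B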

  • Achieving Scalable and Optimized Attribute Revocation in Cloud Computing

    Somchart FUGKEAW  Hiroyuki SATO  

     
    PAPER

      Publicized:
    2017/02/08
      Page(s):
    973-983

    Revocation is one of the major problems for access control systems, especially because of the cost of revoking access to data outsourced to third-party environments such as cloud storage systems. Revocation in cloud-based access control typically involves cryptographic operations that introduce costly overheads for key re-generation, file re-encryption, and key re-distribution. The communication required to retrieve files for re-encryption and load them back to the cloud is another non-trivial cost for data owners. In this paper, we propose a Very Lightweight Proxy Re-Encryption (VL-PRE) scheme to efficiently support attribute-based revocation and policy update for collaborative data sharing in a cloud computing environment. To this end, we propose a three-phase VL-PRE protocol comprising re-encryption key generation, re-encryption key update, and re-encryption key renewal to support optimized attribute revocation and policy update. Finally, we conduct experiments to evaluate the performance of VL-PRE and show that it exhibits lower computation cost and higher scalability in comparison with existing PRE schemes.

  • A Priority Control Method for Media Access Control Method SP-MAC to Improve Throughput of Bidirectional Flows

    Ryoma ANDO  Ryo HAMAMOTO  Hiroyasu OBATA  Chisa TAKANO  Kenji ISHIDA  

     
    PAPER

      Publicized:
    2017/02/08
      Page(s):
    984-993

    In IEEE 802.11 wireless local area networks (WLANs), frame collisions increase drastically as the number of wireless terminals connecting to the same access point (AP) grows, which decreases the total throughput of all terminals. To solve this issue, the authors previously proposed a new media access control (MAC) method, Synchronized Phase MAC (SP-MAC), based on the synchronization phenomena of coupled oscillators. That work addressed a network environment containing only uplink flows from the wireless terminals to an AP. However, it is necessary to consider the real network environment in which uplink and downlink flows are generated simultaneously. If many bidirectional data flows exist in the WLAN, the AP receives many frames from both uplink and downlink because of the collision avoidance of SP-MAC; as a result, the total throughput decreases due to buffer overflow in the AP. In this paper, we propose a priority control method based on SP-MAC that avoids buffer overflow in the AP under the bidirectional environment. We also show by simulation that the proposed method is effective in reducing buffer overflow in the AP and improving total throughput.

  • Regular Section
  • An Efficient Approximate Algorithm for the 1-Median Problem on a Graph

    Koji TABATA  Atsuyoshi NAKAMURA  Mineichi KUDO  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2017/01/23
      Page(s):
    994-1002

    We propose a heuristic approximation algorithm for the 1-median problem, the problem of finding the vertex with the highest closeness centrality. Starting from a randomly selected vertex, our algorithm repeatedly finds a vertex with higher closeness centrality by approximately calculating the closeness centrality of each vertex using simpler spanning subgraphs, called k-neighbor dense shortest path graphs with shortcuts. According to our experimental results using real networks with more than 10,000 vertices, our algorithm is more than 100 times faster than exhaustive search and more than 20 times faster than the state-of-the-art approximation algorithm that uses information annotated to the vertices, while the solutions output by our algorithm have a higher approximation ratio.
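    The search loop behind this heuristic can be sketched as follows: start from a random vertex and keep moving to a neighbor with higher closeness centrality. For clarity, this sketch computes exact BFS-based closeness with networkx (assumed installed) instead of the paper's approximate k-neighbor dense shortest path graphs, so it only illustrates the outer loop, not the speed-up.

        import random
        import networkx as nx

        def closeness(G, v):
            dist = nx.single_source_shortest_path_length(G, v)
            total = sum(dist.values())
            return (len(dist) - 1) / total if total else 0.0

        def one_median_hill_climb(G, seed=0):
            random.seed(seed)
            current = random.choice(list(G.nodes))
            best = closeness(G, current)
            improved = True
            while improved:                      # stop at a local optimum
                improved = False
                for u in G.neighbors(current):
                    c = closeness(G, u)
                    if c > best:
                        current, best, improved = u, c, True
                        break
            return current, best

        print(one_median_hill_climb(nx.karate_club_graph()))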

  • A High Performance FPGA-Based Sorting Accelerator with a Data Compression Mechanism

    Ryohei KOBAYASHI  Kenji KISE  

     
    PAPER-Computer System

      Publicized:
    2017/01/30
      Page(s):
    1003-1015

    Sorting is an extremely important computation kernel that has been accelerated in many fields, such as databases, image processing, and genome analysis. Given the advent of the Internet of Things (IoT) era driven by advances in mobile technology, a sorting method is needed that works in any environment: not only high-performance systems such as servers but also machines with low computational capability such as embedded systems. In this paper, we present an FPGA-based sorting accelerator that combines a sorting network and a merge sorter tree and is customizable by tuning design parameters. The proposed FPGA accelerator sorts data sent from a host PC via the PCIe bus and sends the fully sorted data sequence back. We also present a detailed analytical model that accurately estimates the sorting performance. Thanks to these characteristics, designers can know in advance how fast a developed sorting hardware will be and can implement the best one to fulfill the cost and performance constraints. Our experiments show that the proposed hardware achieves up to 19.5x sorting performance compared with an Intel Core i7-3770K operating at 3.50 GHz when sorting 256M 32-bit integer elements. However, this result is limited by insufficient memory bandwidth. To overcome this problem, we propose a data compression mechanism, and the experimental results show that the sorting hardware with it achieves almost 90% of the estimated performance, whereas the hardware without it achieves about 60%. To allow every designer to easily and freely use this accelerator, the RTL source code is released as open-source hardware.
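    A software analogue of the hardware structure described above, assuming the usual division of labor: a sorting network handles small fixed-size chunks, and a merge sorter tree streams the sorted chunks into one sequence. Chunk size and data here are arbitrary; the real design parameters are hardware-specific.

        import heapq

        def sort_chunks(data, chunk_size=4):
            # stands in for the parallel sorting network over fixed-size blocks
            return [sorted(data[i:i + chunk_size])
                    for i in range(0, len(data), chunk_size)]

        def merge_tree(chunks):
            # stands in for the merge sorter tree combining sorted streams
            return list(heapq.merge(*chunks))

        data = [9, 3, 7, 1, 8, 2, 6, 5, 4, 0, 11, 10]
        print(merge_tree(sort_chunks(data)))     # fully sorted sequence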

  • A Fast and Accurate FPGA System for Short Read Mapping Based on Parallel Comparison on Hash Table

    Yoko SOGABE  Tsutomu MARUYAMA  

     
    PAPER-Computer System

      Publicized:
    2017/01/30
      Page(s):
    1016-1025

    The purpose of DNA sequencing is to determine the order of nucleotides within a target DNA molecule. Next-generation sequencing (NGS) machines fragment the target DNA molecules into short reads, which are short fixed-length subsequences composed of 'A', 'C', 'G', and 'T'. To reconstruct the target DNA from the short reads using a reference genome (a representative example of a species constructed in advance), it is necessary to determine the locations in the target DNA from which the reads were extracted by aligning them onto the reference genome. This process is called short read mapping, and improving its performance is important for realizing fast DNA sequencing. We propose three FPGA acceleration methods based on a hash table: (1) sorting and parallel comparison, (2) matching that allows one mutation to reduce the number of candidates, and (3) an optimized hash function using variable masks. The first reduces the number of accesses to off-chip memory to avoid the bottleneck caused by access latency. The second reduces the number of candidates without degrading mapping sensitivity by allowing one mutation in the comparison. The last reduces hash collisions using a table calculated from the reference genome in advance. We implemented the three methods on a Xilinx Virtex-7 and evaluated them to show their effectiveness. In our experiments, our system achieves 20 times the processing speed of BWA, one of the most popular mapping tools. Furthermore, we show that our system outperforms one of the fastest FPGA short read mapping systems.
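    A hedged sketch of the hash-table flow this abstract describes: build a k-mer index of the reference, look up each read's leading k-mer, and verify candidates while tolerating one mismatch. The FPGA design performs the comparison step in parallel hardware; the k value and sequences below are made up for illustration.

        from collections import defaultdict

        def build_index(reference, k):
            index = defaultdict(list)
            for i in range(len(reference) - k + 1):
                index[reference[i:i + k]].append(i)     # k-mer -> positions
            return index

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def map_read(read, reference, index, k, max_mismatch=1):
            hits = []
            for pos in index.get(read[:k], []):         # candidate positions
                candidate = reference[pos:pos + len(read)]
                if len(candidate) == len(read) and hamming(candidate, read) <= max_mismatch:
                    hits.append(pos)
            return hits

        reference = "ACGTACGTTAGCCGATACGTT"
        index = build_index(reference, k=4)
        print(map_read("ACGTTAGC", reference, index, k=4))   # -> [4]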

  • A Minimalist's Reversible While Language

    Robert GLÜCK  Tetsuo YOKOYAMA  

     
    PAPER-Software System

      Publicized:
    2017/02/06
      Page(s):
    1026-1034

    The paper presents a small reversible language R-CORE, a structured imperative programming language with symbolic tree-structured data (S-expressions). The language is reduced to the core of a reversible language, with a single command for reversibly updating the store, a single reversible control-flow operator, a limited number of variables, and data with a single atom and a single constructor. Despite its extreme simplicity, the language is reversibly universal, which means that it is as powerful as any reversible language can be, while it is linear-time self-interpretable, and it allows reversible programming with dynamic data structures. The four-line program inverter for R-CORE is among the shortest existing program inverters, which demonstrates the conciseness of the language. The translator to R-CORE, which is used to show the formal properties of the language, is clean and modular, and it may serve as a model for related reversible translation problems. The goal is to provide a language that is sufficiently concise for theoretical investigations. Owing to its simplicity, the language may also be used for educational purposes.

  • Fast Persistent Heap Based on Non-Volatile Memory

    Wenzhe ZHANG  Kai LU  Xiaoping WANG  Jie JIAN  

     
    PAPER-Software System

      Publicized:
    2017/02/01
      Page(s):
    1035-1045

    New non-volatile memory (e.g., phase change memory) offers fast access, large capacity, byte addressability, and non-volatility. These features will affect the design of current software systems, and how to manage such memory and what kind of interface to provide for applications has become a hot research topic. This paper proposes FP-Heap, which supports direct access to non-volatile memory through a persistent heap interface. With FP-Heap, traditional persistent object systems can benefit directly from the byte-persistency of non-volatile memory. FP-Heap extends the current virtual memory manager (VMM) to manage non-volatile memory and maintain a persistent mapping relationship. FP-Heap also offers a lightweight transaction mechanism to support atomic updates of persistent data, a simple namespace to facilitate data indexing, and a basic access control mechanism to support data sharing. Compared with the previous work Mnemosyne, FP-Heap achieves higher performance through its customized VMM and optimized transaction mechanism.

  • A Defense Mechanism of Random Routing Mutation in SDN

    Jiang LIU  Hongqi ZHANG  Zhencheng GUO  

     
    PAPER-Information Network

      Publicized:
    2017/02/21
      Page(s):
    1046-1054

    Focusing on network reconnaissance, eavesdropping, and DoS attacks caused by static routing policies, this paper designs a random routing mutation architecture based on the OpenFlow protocol, which takes advantage of the global network view and centralized control of a software-defined network. An entropy matrix of network traffic characteristics is constructed from volume and characteristic measurements of network traffic. Random routing mutation is triggered according to the result of network anomaly detection, which applies a wavelet transform and principal component analysis to the above entropy matrix to capture both spatial and temporal correlations. The generation of a random routing path is specified as a 0-1 knapsack problem, which is solved using an improved ant colony algorithm. Theoretical analysis and simulation results show that the proposed method not only increases the difficulty of network reconnaissance and eavesdropping but also reduces the impact of DoS attacks on normal communication in an SDN network.

  • SMT-Based Scheduling for Overloaded Real-Time Systems

    Zhuo CHENG  Haitao ZHANG  Yasuo TAN  Yuto LIM  

     
    PAPER-Dependable Computing

      Publicized:
    2017/01/23
      Page(s):
    1055-1066

    In a real-time system, tasks are required to complete before their deadlines. Under normal workload conditions, a scheduler with a proper scheduling policy can make all tasks meet their deadlines. In practice, however, the system workload may vary widely. Once the workload becomes so heavy that no feasible schedule can make all tasks meet their deadlines, the system is overloaded and some tasks will miss their deadlines. To alleviate the performance degradation caused by missed deadlines, the design of the scheduler is crucial, and many design objectives can be considered. In this paper, we first focus on maximizing the total number of tasks that complete before their deadlines. A scheduling method based on satisfiability modulo theories (SMT) is proposed, in which scheduling is treated as a satisfiability problem. The key work is to formalize the satisfiability problem in a first-order language; an SMT solver (e.g., Z3 or Yices) is then employed to solve it, and an optimal schedule can be generated from the solution model returned by the solver. The correctness of this method and the optimality of the generated schedule can be verified in a straightforward manner. The time efficiency of the proposed method is demonstrated through various simulations. Moreover, in the proposed SMT-based scheduling method, we divide the scheduling constraints into system constraints and target constraints, so that scheduling for other objectives requires modifying only the target constraints. To demonstrate this advantage, we adapt the SMT-based scheduling method to two other design objectives: maximizing effective processor utilization and maximizing the total value obtained from completed tasks. Only minor changes are needed in the adaptation procedure, which shows that the proposed SMT-based scheduling method is flexible and sufficiently general.
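    A minimal sketch of this kind of SMT formulation using the Z3 Python bindings (assumes the z3-solver package): non-preemptive tasks on one processor, with the number of deadline-meeting tasks maximized. The task set and single-processor model are illustrative assumptions, not the exact system and target constraints from the paper.

        from z3 import Int, Bool, If, Or, Optimize, Sum, sat

        tasks = [("t1", 3, 4), ("t2", 2, 5), ("t3", 4, 6)]  # (name, exec time, deadline)

        opt = Optimize()
        start = {n: Int("start_" + n) for n, _, _ in tasks}
        meets = {n: Bool("meets_" + n) for n, _, _ in tasks}

        for n, c, d in tasks:
            opt.add(start[n] >= 0)
            opt.add(meets[n] == (start[n] + c <= d))        # deadline indicator

        # system constraints: tasks must not overlap on the single processor
        for i, (ni, ci, _) in enumerate(tasks):
            for nj, cj, _ in tasks[i + 1:]:
                opt.add(Or(start[ni] + ci <= start[nj], start[nj] + cj <= start[ni]))

        # target constraint: maximize the number of tasks meeting their deadlines
        opt.maximize(Sum([If(meets[n], 1, 0) for n, _, _ in tasks]))

        if opt.check() == sat:
            m = opt.model()
            for n, _, _ in tasks:
                print(n, "starts at", m[start[n]], "meets deadline:", m[meets[n]])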

  • LTDE: A Layout Tree Based Approach for Deep Page Data Extraction

    Jun ZENG  Feng LI  Brendan FLANAGAN  Sachio HIROKAWA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/02/21
      Page(s):
    1067-1078

    Content extraction from deep Web pages has received great attention in recent years. However, the increasingly complicated HTML structure of Web documents makes it more difficult to recognize data records by analyzing the HTML source code alone. In this paper, we propose a method named LTDE to extract data records from a deep Web page. Instead of analyzing the HTML source code, LTDE utilizes the visual features of data records in deep Web pages. A Web page is considered as a finite set of visual blocks, and the data records are the visual blocks that have similar layouts. We also propose a pattern recognition method named layout tree to cluster visual blocks with similar layouts. The weight of each cluster is calculated, and the visual blocks in the cluster with the highest weight are chosen as the data records to be extracted. The experimental results show that LTDE has higher effectiveness and better robustness for Web data extraction compared to previous works.

  • Set-Based Boosting for Instance-Level Transfer on Multi-Classification

    Haibo YIN  Jun-an YANG  Wei WANG  Hui LIU  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/01/26
      Page(s):
    1079-1086

    Transfer boosting, a branch of instance-based transfer learning, is a commonly adopted transfer learning method. However, currently popular transfer boosting methods focus on binary classification problems even though many practical tasks involve multiple classes. In this paper, we develop a new algorithm called MultiTransferBoost, based on TransferBoost, for multi-classification. MultiTransferBoost first separates the multi-classification problem into several orthogonal binary classification problems. During each iteration, MultiTransferBoost boosts weighted instances from different source domains, where each instance's weight is assigned and updated by evaluating both the difficulty of classifying the instance correctly and the "transferability" of the instance's source domain to the target. The updating process repeats until it reaches the predefined training error or iteration number. The weight update factors, which are analyzed and adjusted to minimize the Hamming loss of the output coding, strengthen the connections among the binary sub-problems during each iteration. Experimental results demonstrate that MultiTransferBoost has better classification performance and a smaller computational burden than existing instance-based algorithms using the One-Against-One (OAO) strategy.

  • An Improved Perceptual MBSS Noise Reduction with an SNR-Based VAD for a Fully Operational Digital Hearing Aid

    Zhaoyang GUO  Xin'an WANG  Bo WANG  Shanshan YONG  

     
    PAPER-Speech and Hearing

      Publicized:
    2017/02/17
      Page(s):
    1087-1096

    This paper first reviews state-of-the-art noise reduction methods and points out their weaknesses in noise reduction performance and speech quality, especially in low signal-to-noise ratio (SNR) environments. It then presents an improved perceptual multiband spectral subtraction (MBSS) noise reduction algorithm (NRA) and a novel robust voice activity detection (VAD) method based on an amended sub-band SNR. The proposed SNR-based VAD considerably increases the accuracy of discriminating between noise and speech frames. Simulation results show that the proposed NRA achieves better segmental SNR (segSNR) and perceptual evaluation of speech quality (PESQ) performance than other noise reduction algorithms, especially in low-SNR environments. In addition, a fully operational digital hearing aid chip based on the proposed NRA is designed and fabricated in a 0.13 µm CMOS process. The chip draws 1.3 mA at 1.2 V operation. Acoustic tests show a maximum output sound pressure level (OSPL) of 114.6 dB SPL, an equivalent input noise of 5.9 dB SPL, and a total harmonic distortion of 2.5%. The proposed digital hearing aid chip is therefore a promising candidate for high-performance hearing-aid systems.
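    An illustrative per-frame sketch of multiband spectral subtraction with NumPy: the magnitude spectrum is split into sub-bands, and a noise estimate is over-subtracted in each band according to the band SNR. The band count, over-subtraction curve, and spectral floor are assumptions for the sketch, not the tuned values of the proposed NRA.

        import numpy as np

        def mbss_frame(noisy_mag, noise_mag, n_bands=4, floor=0.01):
            out = np.empty_like(noisy_mag)
            bands = np.array_split(np.arange(len(noisy_mag)), n_bands)
            for idx in bands:
                band_snr = 10 * np.log10(np.sum(noisy_mag[idx] ** 2) /
                                         (np.sum(noise_mag[idx] ** 2) + 1e-12))
                alpha = np.clip(4.0 - 0.15 * band_snr, 1.0, 5.0)  # subtract more at low SNR
                sub = noisy_mag[idx] ** 2 - alpha * noise_mag[idx] ** 2
                out[idx] = np.sqrt(np.maximum(sub, floor * noisy_mag[idx] ** 2))
            return out

        rng = np.random.default_rng(0)
        noisy = np.abs(rng.normal(size=256)) + 1.0     # stand-in magnitude spectrum
        noise = np.full(256, 0.8)                      # stand-in noise estimate
        print(mbss_frame(noisy, noise)[:5])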

  • Removal of Salt-and-Pepper Noise Using a High-Precision Frequency Analysis Approach

    Masaya HASEGAWA  Kazuki SAKASHITA  Kousei UCHIKOSHI  Shigeki HIROBAYASHI  Tadanobu MISAWA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2017/01/24
      Page(s):
    1097-1105

    A digital image is often degraded by impulse noise that can occur during processes such as transmission. Impulse noise converts pixel values in the image into black (0) or white (255) at random and is also called salt-and-pepper noise. In this paper, we identify the pixels that have been damaged by impulse noise by analyzing the frequency content of the noisy image using non-harmonic analysis (NHA). Experimental results confirm that this method shows superior performance, in terms of PSNR, compared to recent denoising methods. In addition, we show that the proposed method is particularly effective at eliminating impulse noise in images with high noise rates.
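    A small sketch of the degradation model stated above: each pixel independently becomes 0 or 255 with some probability. This only illustrates how salt-and-pepper noise corrupts an image, not the paper's NHA-based restoration; the probability and image values are arbitrary.

        import numpy as np

        def add_salt_and_pepper(image, p=0.1, seed=0):
            rng = np.random.default_rng(seed)
            noisy = image.copy()
            mask = rng.random(image.shape) < p               # pixels to corrupt
            noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
            return noisy

        clean = np.full((4, 4), 128, dtype=np.uint8)
        noisy = add_salt_and_pepper(clean, p=0.25)
        print(noisy)
        print("corrupted pixels:", int(np.sum(noisy != clean)))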

  • Correcting Syntactic Annotation Errors Based on Tree Mining

    Kanta SUZUKI  Yoshihide KATO  Shigeki MATSUBARA  

     
    PAPER-Natural Language Processing

      Publicized:
    2017/01/23
      Page(s):
    1106-1113

    This paper provides a new method to correct annotation errors in a treebank. A previous error correction method constructs a pseudo parallel corpus in which incorrect partial parse trees are paired with correct ones, extracts error correction rules from the parallel corpus, and corrects errors by applying these rules to a treebank. However, that method does not achieve wide coverage of error correction. To achieve wide coverage, our method adopts a different approach: we consider that if an infrequent pattern can be transformed into a frequent one, it is an annotation error pattern. Based on a tree mining technique, our method seeks such infrequent tree patterns and constructs error correction rules, each of which consists of an infrequent pattern and a corresponding frequent pattern. We conducted an experiment using the Penn Treebank and obtained 1,987 rules that are not constructed by the previous method, and the rules achieved good precision.

  • Robust Singing Transcription System Using Local Homogeneity in the Harmonic Structure

    Hoon HEO  Kyogu LEE  

     
    PAPER-Music Information Processing

      Publicized:
    2017/02/18
      Page(s):
    1114-1123

    Automatic music transcription from audio has long been one of the most intriguing problems and a challenge in the field of music information retrieval, because it requires a series of low-level tasks such as onset/offset detection and F0 estimation, followed by high-level post-processing for symbolic representation. In this paper, a comprehensive transcription system for monophonic singing voice based on harmonic structure analysis is proposed. Given a precise tracking of the fundamental frequency, a novel acoustic feature is derived to signify the harmonic structure in singing voice signals, regardless of loudness and pitch. It is then used to generate a parametric mixture model based on the von Mises-Fisher distribution, so that the model represents the intrinsic harmonic structures within a region of smoothly connected notes. To identify the note boundaries, the local homogeneity in the harmonic structure is exploited by two different methods: self-similarity analysis and a hidden Markov model. The proposed system identifies note attributes including onset time, duration, and note pitch. Evaluations are conducted from various aspects to verify the performance improvement and robustness of the proposed system, using the latest evaluation methodology for singing transcription. The results show that the proposed system significantly outperforms other systems, including the state-of-the-art ones.

  • RRWL: Round Robin-Based Wear Leveling Using Block Erase Table for Flash Memory

    Seon Hwan KIM  Ju Hee CHOI  Jong Wook KWAK  

     
    LETTER-Software System

      Publicized:
    2017/01/30
      Page(s):
    1124-1127

    In this letter, we propose round robin-based wear leveling (RRWL) for flash memory systems. RRWL uses a block erase table (BET), a bit array that records the erasure history of blocks. A BET can use one-to-one mode to increase wear-leveling performance or one-to-many mode to reduce memory consumption. However, one-to-many mode decreases the accuracy of cold block information, which degrades the lifetime of the flash memory. To solve this problem, RRWL consistently uses one-to-one mode based on a round robin method, increasing the accuracy of cold block identification while keeping the BET as small as in one-to-many mode. Experiments show that RRWL increases the lifetime of flash memory by up to 47% and 14% compared with BET and HaWL, respectively.
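    A rough illustration of the block erase table idea: one bit per block (one-to-one mode) or per block group (one-to-many mode) records whether an erase happened in the current interval, so groups whose bit stays 0 point to cold data. The grouping and reset policy shown here are simplifications; RRWL's round-robin refinement is not reproduced.

        class BlockEraseTable:
            def __init__(self, num_blocks, blocks_per_bit=1):
                self.blocks_per_bit = blocks_per_bit          # 1 = one-to-one mode
                size = (num_blocks + blocks_per_bit - 1) // blocks_per_bit
                self.bits = [0] * size

            def record_erase(self, block):
                self.bits[block // self.blocks_per_bit] = 1

            def cold_groups(self):
                return [i for i, bit in enumerate(self.bits) if bit == 0]

            def reset(self):
                self.bits = [0] * len(self.bits)

        bet = BlockEraseTable(num_blocks=8, blocks_per_bit=2)  # one-to-many mode
        for blk in (0, 1, 5):
            bet.record_erase(blk)
        print(bet.cold_groups())    # groups with no erases this interval -> [1, 3]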

  • Change-Prone Java Method Prediction by Focusing on Individual Differences in Comment Density

    Aji ERY BURHANDENNY  Hirohisa AMAN  Minoru KAWAHARA  

     
    LETTER-Software Engineering

      Publicized:
    2017/02/15
      Page(s):
    1128-1131

    This paper focuses on differences in comment density among individual programmers and proposes adjusting the conventional code complexity metric (cyclomatic complexity) using the abnormality of the comment density. An empirical study with nine popular open-source Java products (comprising 103,246 methods) shows that the proposed metric performs better than the conventional one in predicting change-prone methods: it improves the area under the ROC curve (AUC) by about 3.4% on average.
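    One possible reading of "adjusting cyclomatic complexity by the abnormality of the comment density", sketched in Python: scale a method's cyclomatic complexity by how far its comment density deviates from the author's usual density (a z-score). The actual adjustment formula used in the letter may differ; this only illustrates the idea.

        import statistics

        def adjusted_complexity(cc, method_density, author_densities):
            mu = statistics.mean(author_densities)
            sigma = statistics.pstdev(author_densities) or 1e-9
            abnormality = abs(method_density - mu) / sigma     # z-score magnitude
            return cc * (1.0 + abnormality)

        # a sparsely commented method from an author who usually comments heavily
        print(adjusted_complexity(cc=10, method_density=0.05,
                                  author_densities=[0.20, 0.25, 0.18, 0.22]))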

  • Detecting Transportation Modes Using Deep Neural Network

    Hao WANG  GaoJun LIU  Jianyong DUAN  Lei ZHANG  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2017/02/15
      Page(s):
    1132-1135

    Existing studies on transportation mode detection from global positioning system (GPS) trajectories mainly adopt handcrafted features. These features require researchers with a professional background and do not always work well because of the complexity of traffic behavior. To address these issues, we propose a model that uses a sparse autoencoder to extract point-level deep features from point-level handcrafted features. A convolutional neural network then aggregates the point-level deep features and generates a trajectory-level deep feature. A deep neural network incorporates the trajectory-level handcrafted features and the trajectory-level deep feature to detect users' transportation modes. Experiments conducted on Microsoft's GeoLife data show that our model can automatically extract effective features and improve the accuracy of transportation mode detection. Compared with a model using only handcrafted features and shallow classifiers, the proposed model increases the maximum accuracy by 6%.

  • Learning Corpus-Invariant Discriminant Feature Representations for Speech Emotion Recognition

    Peng SONG  Shifeng OU  Zhenbin DU  Yanyan GUO  Wenming MA  Jinglei LIU  Wenming ZHENG  

     
    LETTER-Speech and Hearing

      Publicized:
    2017/02/02
      Page(s):
    1136-1139

    As a hot topic in speech signal processing, speech emotion recognition methods have developed rapidly in recent years, and some satisfactory results have been achieved. However, most of these methods are trained and evaluated on the same corpus. In reality, the training and testing data are often collected from different corpora, and their feature distributions often differ, which greatly affects recognition performance. To tackle this problem, a novel corpus-invariant discriminant feature representation algorithm, called transfer discriminant analysis (TDA), is presented for speech emotion recognition. The basic idea of TDA is to integrate the kernel LDA algorithm and a similarity measurement of distributions into one objective function. Experimental results under cross-corpus conditions show that our proposed method can significantly improve the recognition rates.

  • A Simple and Fast CU Division Algorithm for HEVC Intra Prediction

    Yankang WANG  Ryota TAKAGI  Genki YOSHITAKE  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/02/06
      Page(s):
    1140-1143

    High Efficiency Video Coding (HEVC) is the video coding standard succeeding H.264/AVC. By introducing a flexible coding unit (CU), which can be recursively divided from 64×64 down to 8×8 blocks in a quadtree structure, HEVC achieves significantly higher coding efficiency than previous standards. With the flexible CU structure, HEVC can effectively adapt to highly varying content with smaller CUs or to flat content with larger CUs, making it suitable for applications from mobile video to super-high-definition television. On the other hand, CU division incurs a high computational cost for HEVC. In this paper, we propose a simple and fast CU division algorithm that uses only a subset of pixels to determine when CU division happens. Experimental results show that our algorithm can achieve prediction quality close to the HEVC Test Model with a much lower computational cost.
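    A hedged sketch of the "subset of pixels" idea: sample a sparse grid of pixels from the CU and use a simple homogeneity measure (here, variance) to decide whether the CU should be divided further. The actual decision criterion and thresholds of the proposed algorithm are not reproduced; the sampling step and threshold are assumptions.

        import numpy as np

        def should_split(cu, step=4, threshold=100.0):
            sample = cu[::step, ::step]          # subset of pixels only
            return float(sample.var()) > threshold

        rng = np.random.default_rng(1)
        flat_cu = np.full((64, 64), 128, dtype=float)              # homogeneous block
        textured_cu = rng.integers(0, 256, size=(64, 64)).astype(float)
        print(should_split(flat_cu), should_split(textured_cu))    # False True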

  • Unsupervised Image Steganalysis Method Using Self-Learning Ensemble Discriminant Clustering

    Bing CAO  Guorui FENG  Zhaoxia YIN  Lingyan FAN  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/02/18
      Page(s):
    1144-1147

    Image steganography is a technique for embedding a secret message into a digital image to send information securely; in contrast, steganalysis focuses on detecting the presence of secret messages hidden by steganography. The modern approach to steganalysis is based on supervised learning, where the training set must include both steganographic and natural image features. However, if a new steganographic method is proposed, a detector trained only on existing methods generally suffers a serious drop in detection accuracy because of the mismatch between the training and testing steganographic methods. In this paper, we address this as an unsupervised learning problem and propose a detection model called self-learning ensemble discriminant clustering (SEDC), which aims to take full advantage of the statistical properties of the natural and testing images to estimate the optimal projection vector. This method can adaptively select the most discriminative subspace and then use K-means clustering to generate the final class labels. Experimental results on the J-UNIWARD and nsF5 steganographic methods with three feature extraction methods (CC-JRM, DCTR, and GFR) show that the proposed scheme classifies more effectively than blind speculation.

  • Data-Adapted Volume Rendering for Scattered Point Data

    Junda ZHANG  Libing JIANG  Longxing KONG  Li WANG  Xiao'an TANG  

     
    LETTER-Computer Graphics

      Publicized:
    2017/02/15
      Page(s):
    1148-1151

    In this letter, we present a novel method for reconstructing a continuous data field from scattered point data, which leads to a more characteristic visualization result under volume rendering. The gradient distribution of the scattered point data is analyzed for local feature investigation via singular value decomposition. A data-adaptive ellipsoidal shaped function is constructed as the penalty function to evaluate point weight coefficients in the MLS approximation. Experimental results show that the proposed method reduces the reconstruction error and yields a visualization with better feature discrimination.

  • Low-Complexity Recursive-Least-Squares-Based Online Nonnegative Matrix Factorization Algorithm for Audio Source Separation

    Seokjin LEE  

     
    LETTER-Music Information Processing

      Publicized:
    2017/02/06
      Page(s):
    1152-1156

    An online nonnegative matrix factorization (NMF) algorithm based on recursive least squares (RLS) is described in matrix form, and a simplified algorithm with low-complexity calculation is developed for a frame-by-frame online audio source separation system. First, the online NMF algorithm based on the RLS method is described as solving the NMF problem recursively. Next, a simplified algorithm is developed to approximate the RLS-based online NMF algorithm with low complexity. The proposed algorithm is evaluated in terms of audio source separation, and the results show that its performance is superior to that of the conventional online NMF algorithm with significantly reduced complexity.