
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E101-D No.3  (Publication Date: 2018/03/01)

    Special Section on Foundations of Computer Science — Frontiers of Theoretical Computer Science —
  • FOREWORD Open Access

    Shuji KIJIMA  

     
    FOREWORD

      Page(s):
    573-573
  • Complexity of the Minimum Single Dominating Cycle Problem for Graph Classes

    Hiroshi ETO  Hiroyuki KAWAHARA  Eiji MIYANO  Natsuki NONOUE  

     
    PAPER

      Publicized:
    2017/12/19
      Page(s):
    574-581

    In this paper, we study a variant of the MINIMUM DOMINATING SET problem. Given an unweighted undirected graph G=(V,E) of n=|V| vertices, the goal of the MINIMUM SINGLE DOMINATING CYCLE problem (MinSDC) is to find a single shortest cycle which dominates all vertices, i.e., a cycle C such that for the set V(C) of vertices in C and the set N(V(C)) of neighbor vertices of C, V(G)=V(C)∪N(V(C)) and |V(C)| is minimum over all dominating cycles in G [6], [17], [24]. We consider the (in)approximability of MinSDC when input graphs are restricted to special classes of graphs. We first show that MinSDC remains NP-hard to approximate even when restricted to planar, bipartite, chordal, or r-regular (r≥3) graphs. Then, we show the (ln n+1)-approximability and the (1-ε)ln n-inapproximability of MinSDC on split graphs under P≠NP. Furthermore, we explicitly design a linear-time algorithm to solve MinSDC for graphs with bounded treewidth and estimate the hidden constant factor of its running-time bound.
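
    For concreteness, the dominating-cycle condition can be checked directly. Below is a minimal Python sketch (not from the paper) that tests V(G)=V(C)∪N(V(C)) for a candidate cycle and finds a shortest dominating cycle by brute force; the paper's contribution concerns what is achievable when such exponential search is off the table.

```python
from itertools import permutations

def is_dominating_cycle(adj, cycle):
    """Check the MinSDC condition V(G) = V(C) ∪ N(V(C)):
    every vertex is on the cycle or adjacent to a cycle vertex."""
    covered = set(cycle)
    for v in cycle:
        covered |= adj[v]
    return covered == set(adj)

def min_sdc_bruteforce(adj):
    """Smallest dominating cycle by exhaustive search (exponential time;
    for illustration only -- the paper studies approximability instead)."""
    vertices = sorted(adj)
    for k in range(3, len(vertices) + 1):          # a cycle needs >= 3 vertices
        for perm in permutations(vertices, k):
            # check consecutive edges, including the closing edge
            if all(perm[(i + 1) % k] in adj[perm[i]] for i in range(k)):
                if is_dominating_cycle(adj, perm):
                    return list(perm)
    return None

# 5-cycle with a pendant vertex: the cycle (0,1,2,3,4) dominates all six
# vertices since vertex 5 neighbors vertex 0.
adj = {0: {1, 4, 5}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}, 5: {0}}
print(min_sdc_bruteforce(adj))
```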

  • Polynomial Time Learnability of Graph Pattern Languages Defined by Cographs

    Takayoshi SHOUDAI  Yuta YOSHIMURA  Yusuke SUZUKI  Tomoyuki UCHIDA  Tetsuhiro MIYAHARA  

     
    PAPER

      Publicized:
    2017/12/19
      Page(s):
    582-592

    A cograph (complement-reducible graph) is a graph that can be generated from a single-vertex graph by disjoint union and complement operations. Cographs arise in many areas of computer science and have been studied extensively. With the goal of developing an effective data mining method for graph-structured data, in this paper we introduce a graph pattern expression called a cograph pattern, a special type of cograph having structured variables. Firstly, we show that the problem of deciding whether or not a given cograph pattern g matches a given cograph G is NP-complete. Given this result, we consider the polynomial-time learnability of cograph pattern languages defined by cograph patterns whose variables are labeled with mutually distinct labels, called linear cograph patterns. Secondly, we present a polynomial-time matching algorithm for linear cograph patterns. Next, we give a polynomial-time algorithm for obtaining a minimally generalized linear cograph pattern that explains given positive data. Finally, we show that the class of linear cograph pattern languages is polynomial-time inductively inferable from positive data.
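
    As a reminder of the underlying structure, the following sketch (illustrative only; any graph it produces is a cograph by construction) generates cographs from single-vertex graphs using the disjoint union and complement operations named in the abstract.

```python
def single(v):
    """Single-vertex graph, represented as (vertex set, edge set)."""
    return ({v}, set())

def disjoint_union(g, h):
    (vg, eg), (vh, eh) = g, h
    assert not vg & vh, "vertex sets must be disjoint"
    return (vg | vh, eg | eh)

def complement(g):
    v, e = g
    all_pairs = {frozenset((a, b)) for a in v for b in v if a != b}
    return (v, all_pairs - e)

def join(g, h):
    """The 'join' used when building cographs: complement of the disjoint
    union of complements, i.e. add all edges between the two parts."""
    return complement(disjoint_union(complement(g), complement(h)))

# P4 (the path on 4 vertices) is the canonical non-cograph; everything
# built from single() with these operations is guaranteed P4-free.
k2 = join(single(0), single(1))            # a single edge
g3 = disjoint_union(k2, single(2))         # edge plus isolated vertex
g4 = join(g3, single(3))                   # a 4-vertex cograph
print(sorted(tuple(sorted(e)) for e in g4[1]))
```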

  • Approximate Frequent Pattern Discovery in Compressed Space

    Shouhei FUKUNAGA  Yoshimasa TAKABATAKE  Tomohiro I  Hiroshi SAKAMOTO  

     
    PAPER

      Publicized:
    2017/12/19
      Page(s):
    593-601

    A grammar compression is a restricted context-free grammar (CFG) that derives a single string deterministically. The goal of a grammar compression algorithm is to produce a smaller CFG by finding and removing duplicate patterns, which is simply a frequent pattern discovery process. Any frequent pattern can be obtained in linear time; however, a huge working space is required for longer patterns, and the entire string must be preloaded into memory. We propose an online algorithm to address this problem approximately within compressed space. For an input sequence of symbols a1,a2,…, let Gi be a grammar compression for the string a1a2…ai. In this study, an online algorithm is one that can compute Gi+1 from (Gi,ai+1) without explicitly decompressing Gi. Here, let G be a grammar compression for a string S. We say that a variable X approximates a substring P of S within approximation ratio δ iff for any interval [i,j] with P=S[i,j], the parse tree of G has a node labeled with X that derives S[l,r] for a subinterval [l,r] of [i,j] satisfying |[l,r]|≥δ|[i,j]|. Then, G solves the frequent pattern discovery problem approximately within δ iff for any frequent pattern P of S, there exists a variable that approximates P within δ. Here, δ is called the approximation ratio of G for S. Previously, the best approximation ratio obtained by a polynomial time algorithm was Ω(1/lg²|P|). The main contribution of this work is to present a new lower bound Ω(1/(lg*|S|lg|P|)) that is larger than the previous bound when lg*|S|<lg|P|. Experimental results demonstrate that the proposed algorithm extracts sufficiently long frequent patterns and significantly reduces memory consumption compared to the offline algorithm in the previous work.
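
    The online algorithm itself is involved, but the underlying notion of a grammar compression can be illustrated offline. The sketch below (a Re-Pair-style most-frequent-pair replacement, not the authors' algorithm) builds a small CFG that derives the input string deterministically by removing duplicate patterns.

```python
from collections import Counter

def grammar_compress(s):
    """Repeatedly replace the most frequent adjacent pair of symbols with
    a fresh nonterminal (Re-Pair style).  The rules plus the final
    sequence form a CFG that derives exactly s."""
    seq = list(s)
    rules = {}
    counter = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                  # no duplicate pattern left to remove
            break
        nt = f"X{counter}"
        counter += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):           # non-overlapping left-to-right pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return rules, seq

rules, start = grammar_compress("abababcabab")
print(rules)   # {'X0': ('a', 'b'), 'X1': ('X0', 'X0')}
print(start)   # ['X1', 'X0', 'c', 'X1']
```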

  • Byzantine-Tolerant Gathering of Mobile Agents in Arbitrary Networks with Authenticated Whiteboards

    Masashi TSUCHIDA  Fukuhito OOSHITA  Michiko INOUE  

     
    PAPER

      Publicized:
    2017/12/19
      Page(s):
    602-610

    We propose an algorithm for the gathering problem of mobile agents in arbitrary networks (graphs) with Byzantine agents. Our algorithm makes all correct agents meet at a single node in O(fm) time, where f is an upper bound on the number of Byzantine agents and m is the number of edges, under the assumptions that agents have unique IDs and behave synchronously, that each node is equipped with an authenticated whiteboard, and that f is known to the agents. Here, a whiteboard is node memory in which agents can leave information. Since the existing algorithm achieves gathering without a whiteboard in Õ(n⁹λ) time, where n is the number of nodes and λ is the length of the longest ID, our algorithm shows that an authenticated whiteboard can significantly reduce the time for the gathering problem in Byzantine environments.

  • Polynomial-Space Exact Algorithms for the Bipartite Traveling Salesman Problem

    Mohd SHAHRIZAN OTHMAN  Aleksandar SHURBEVSKI  Hiroshi NAGAMOCHI  

     
    LETTER

      Publicized:
    2017/12/19
      Page(s):
    611-612

    Given an edge-weighted bipartite digraph G=(A,B;E), the Bipartite Traveling Salesman Problem (BTSP) asks for the minimum cost of a Hamiltonian cycle of G, or to determine that none exists. When |A|=|B|=n, the BTSP can be solved using polynomial space in O*(4^(2n) n^(log n)) time by the divide-and-conquer algorithm of Gurevich and Shelah (SIAM Journal on Computing, 16(3), pp.486-502, 1987). We adapt their algorithm to the bipartite case and show an improved time bound of O*(4^(2n)), saving the n^(log n) factor.

  • A Heuristic for Constructing Smaller Automata Based on Suffix Sorting and Its Application in Network Security

    Inbok LEE  Victor C. VALGENTI  Min S. KIM  Sung-il OH  

     
    LETTER

      Publicized:
    2017/12/19
      Page(s):
    613-615

    In this paper we present a simple heuristic for constructing smaller automata for a set of regular expressions, based on suffix sorting: finding common prefixes and suffixes in the regular expressions and merging them. This is an important problem in network security. We applied our approach to random and real-world regular expressions. Experimental results showed that our approach yields up to a 12-fold improvement in throughput.
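
    The core idea of merging shared affixes can be illustrated on literal strings. The sketch below (illustrative only; the paper handles full regular expressions via suffix sorting) merges common prefixes of a pattern set into a trie and compares the state count against separate per-pattern acceptors.

```python
def build_prefix_trie(patterns):
    """Merge literal patterns by their common prefixes into a trie,
    i.e. a smaller acceptor than one automaton per pattern."""
    root = {}
    for p in patterns:
        node = root
        for ch in p:
            node = node.setdefault(ch, {})
        node["$"] = True          # end-of-pattern marker
    return root

def count_states(node):
    return 1 + sum(count_states(c) for k, c in node.items() if k != "$")

patterns = ["GET /admin", "GET /api", "GET /app", "POST /api"]
trie = build_prefix_trie(patterns)
# Separate automata would need sum(len(p)+1) states; the trie shares
# the common prefix "GET /a" among the first three patterns.
print(count_states(trie), "states vs", sum(len(p) + 1 for p in patterns))
```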

  • Regular Section
  • A General Low-Cost Fast Hybrid Reconfiguration Architecture for FPGA-Based Self-Adaptive System

    Rui YAO  Ping ZHU  Junjie DU  Meiqun WANG  Zhaihe ZHOU  

     
    PAPER-Computer System

      Publicized:
    2017/12/18
      Page(s):
    616-626

    Evolvable hardware (EHW) based on field-programmable gate arrays (FPGAs) opens up new possibilities for building efficient adaptive systems. State-of-the-art EHW systems based on virtual reconfiguration and dynamic partial reconfiguration (DPR) both have limitations: the former incurs a huge area overhead and circuit delay, while the latter suffers from slow configuration speed and low flexibility. We therefore propose a general low-cost fast hybrid reconfiguration architecture that merges the high flexibility of virtual reconfiguration with the low resource cost of DPR. Moreover, bitstream relocation technology is introduced to save bitstream storage space, and discrepancy configuration technology is adopted to reduce reconfiguration time. An embedded RAM core is used to store bitstreams, which further accelerates reconfiguration. The proposed architecture is evaluated through the online evolution of a digital image filter implemented on the Xilinx Virtex-6 FPGA development board ML605. The experimental results show that our system has lower resource overhead, higher operating frequency, faster reconfiguration, and smaller bitstream storage requirements than previous work.

  • An Efficient Parallel Coding Scheme in Erasure-Coded Storage Systems

    Wenrui DONG  Guangming LIU  

     
    PAPER-Computer System

      Publicized:
    2017/12/12
      Page(s):
    627-643

    Erasure codes are considered one of the most promising techniques for enhancing data reliability and storage efficiency in modern distributed storage systems. However, erasure codes often suffer from a time-consuming coding process that makes them nearly impractical. A promising way to address this problem is to parallelize erasure-code-based applications on modern multi-/many-core processors so as to fully exploit the hardware resources of those platforms. However, complicated data allocation and limited I/O throughput pose great challenges to such parallelization. To address these challenges, we propose a general multi-threaded parallel coding approach, consisting of a general multi-threaded parallel coding model, named MTPerasure, and two parallel coding algorithms, named sdaParallel and ddaParallel, which adapt to different I/O circumstances. MTPerasure focuses on high-level data allocation; it is applicable to all erasure codes and can be implemented without any modification of the low-level coding algorithms. sdaParallel divides the data into several parts and allocates the parts to different threads statically, eliminating synchronization latency among threads and improving parallel coding performance under the dummy I/O mode. ddaParallel employs two threads to perform I/O reading and writing independently on small pieces, which increases I/O throughput; the data pieces are assigned to the coding threads dynamically. A special thread scheduling algorithm is also proposed to reduce thread migration latency. To evaluate our proposal, we parallelized the popular open-source library jerasure based on our approach. A detailed performance comparison with the original sequential coding program shows that the proposed parallel approach outperforms the sequential program with speedups from 1.4x up to 7x, achieving better utilization of the computation and I/O resources.
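
    The static-allocation idea behind sdaParallel can be sketched in a few lines. In the illustration below (hypothetical names; a real system would run an erasure-coding kernel such as jerasure's rather than a checksum), the input is split once and each thread works on its own slice with no synchronization during coding.

```python
import threading
import zlib

def coding_kernel(block):
    """Stand-in for the per-slice coding work (here just a CRC32)."""
    return zlib.crc32(block)

def sda_parallel(data, num_threads):
    """Static data allocation in the spirit of sdaParallel: split the
    input once, give each thread its own slice, and avoid any
    synchronization between threads while they code."""
    chunk = (len(data) + num_threads - 1) // num_threads
    results = [None] * num_threads

    def worker(tid):
        results[tid] = coding_kernel(data[tid * chunk:(tid + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(sda_parallel(b"x" * 1_000_000, 4))
```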

  • Comparative Study between Two Approaches Using Edit Operations and Code Differences to Detect Past Refactorings

    Takayuki OMORI  Katsuhisa MARUYAMA  

     
    PAPER-Software Engineering

      Publicized:
    2017/11/27
      Page(s):
    644-658

    Understanding which refactoring transformations were performed is in demand in modern software development. Traditionally, researchers have tackled the understanding of code changes using history data derived from version control systems, and such studies have pointed out problems of this traditional approach, such as the entanglement of multiple changes. To alleviate these problems, operation histories recorded in IDEs' code editors are now available as a new source of software evolution data; by replaying such histories, we can investigate past code changes at a fine-grained level. However, prior studies did not provide enough evidence of the effectiveness of this approach for detecting refactoring transformations. This paper describes an experiment in which participants detect refactoring transformations performed by other participants by investigating the code changes with an operation-replay tool and with diff tools. The results show that both approaches have factors that cause refactoring transformations to be misunderstood or overlooked. Two negative factors, divided operations and generated compound operations, were observed in the operation-based approach, whereas all negative factors in the difference-based approach resulted from three problems: tangling, shadowing, and out-of-order code changes. This paper also presents seven concrete examples of participants' mistakes in both approaches. These findings give us hints for improving existing tools for understanding code changes and detecting refactoring transformations.

  • Efficient Parallel Join Processing Exploiting SIMD in Multi-Thread Environments

    Gilseok HONG  Seonghyeon KANG  Chang soo KIM  Jun-Ki MIN  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2017/12/14
      Page(s):
    659-667

    In this paper, we study parallel join processing to improve the performance of the merge phase of sort-merge join by integrating all the parallelism provided by mainstream CPUs. Modern CPUs support SIMD instruction sets with wide SIMD registers, which allow multiple data items to be processed by a single instruction. We devise an efficient parallel join algorithm, called Parallel Merge Join with SIMD instructions (PMJS). In our proposed algorithm, we exploit data parallelism through SIMD instructions, and we further accelerate performance by avoiding conditional branch instructions. Furthermore, to take advantage of multiple cores, our algorithm is multi-threaded. To distribute the workload evenly across threads, we devise an efficient workload balancing algorithm based on a kernel density estimator, which accurately estimates the workload of each thread.

  • Stochastic Divergence Minimization for Biterm Topic Models

    Zhenghang CUI  Issei SATO  Masashi SUGIYAMA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2017/12/20
      Page(s):
    668-677

    With the emergence and thriving development of social networks, a huge number of short texts are accumulated and need to be processed. Inferring the latent topics of collected short texts is an essential task for understanding their hidden structure and predicting new content. The biterm topic model (BTM) was recently proposed for short texts to overcome the sparseness of document-level word co-occurrences by directly modeling the generation process of word pairs. Stochastic inference algorithms based on collapsed Gibbs sampling (CGS) and collapsed variational inference have been proposed for BTM; however, they either incur large computational complexity or rely on very crude estimation that does not preserve sufficient statistics. In this work, we develop a stochastic divergence minimization (SDM) inference algorithm for BTM that achieves better predictive likelihood in a scalable way. Experiments show that SDM-BTM trained on 30% of the data outperforms the best existing algorithm trained on the full data.
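
    The biterm representation at the heart of BTM is easy to make concrete. The sketch below (illustrative; deduplicating tokens per document is our simplification) extracts the unordered word pairs that BTM models instead of document-level word occurrences.

```python
from itertools import combinations
from collections import Counter

def biterms(doc_tokens):
    """All unordered word pairs in a short text: BTM models the corpus
    as a bag of these biterms rather than per-document occurrences."""
    return [tuple(sorted(p)) for p in combinations(set(doc_tokens), 2)]

corpus = [["apple", "fruit", "juice"],
          ["apple", "phone"],
          ["fruit", "juice"]]
counts = Counter(b for doc in corpus for b in biterms(doc))
print(counts.most_common(3))
```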

  • A Predictive Logistic Regression Based Doze Mode Energy-Efficiency Mechanism in EPON

    MohammadAmin LOTFOLAHI  Cheng-Zen YANG  I-Shyan HWANG  AliAkbar NIKOUKAR  Yu-Hua WU  

     
    PAPER-Information Network

      Publicized:
    2017/12/18
      Page(s):
    678-684

    Ethernet passive optical network (EPON) is one of the most energy-efficient access networks. Many studies have sought maximum energy saving in the EPON; however, there is a trade-off between achieving maximum energy saving and guaranteeing QoS. In this paper, a predictive doze mode mechanism in an enhanced EPON architecture is proposed that achieves energy saving by using a logistic regression (LR) model. The optical line terminal (OLT) employs an enhanced Doze Manager that applies the LR model to predict the doze periods of the optical network units (ONUs). The doze periods are estimated more accurately from historical high-priority traffic information, and the logistic regression DBA (LR-DBA) performs dynamic bandwidth allocation accordingly. The proposed LR-DBA mechanism is compared with a scheme without energy saving (IPACT) and another scheme with energy saving (GDBA). Simulation results show that LR-DBA effectively reduces the power consumption of ONUs in most cases, with improvements of up to 45%, while guaranteeing QoS metrics such as high-priority traffic delay and jitter.
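
    The role of the LR model can be sketched as follows. The coefficients and features below are assumed for illustration, not taken from the paper: recent high-priority traffic counts feed a logistic function whose output drives the doze decision.

```python
import math

def doze_probability(weights, bias, features):
    """Hypothetical LR predictor: features could be, e.g., recent
    high-priority traffic levels; the output is the probability that
    the ONU can safely doze in the next cycle."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy coefficients (assumed): low recent traffic -> high doze probability.
weights, bias = [-0.8, -0.5], 2.0
for traffic in ([0.1, 0.2], [3.0, 2.5]):
    p = doze_probability(weights, bias, traffic)
    print(traffic, "-> doze" if p > 0.5 else "-> stay awake", round(p, 3))
```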

  • FCReducer: Locating Symmetric Cryptographic Functions on the Memory

    Ryoya FURUKAWA  Ryoichi ISAWA  Masakatu MORII  Daisuke INOUE  Koji NAKAO  

     
    PAPER-Information Network

      Publicized:
    2017/12/14
      Page(s):
    685-697

    Malicious software (malware) poses various significant challenges. One is the need to retrieve plain-text messages transmitted between malware and its herders through an encrypted network channel. Those messages (e.g., commands for the malware) can be a useful hint for revealing malicious activities, but retrieving them is challenging even if the malware is executed on an analysis computer. To assist analysts in retrieving the plain text from memory, this paper presents FCReducer (Function Candidate Reducer), which provides a small candidate set of cryptographic functions called by malware. Given this set, an analyst checks the candidates to locate the cryptographic functions; if the decryption function is found, its output yields the plain text. Although existing systems such as CipherXRay have been proposed to locate cryptographic functions, they rely heavily on fine-grained dynamic taint analysis (DTA), which makes them vulnerable to under-tainting, i.e., failure to track data propagation. To overcome under-tainting, FCReducer conducts coarse-grained DTA and generates a data dependency graph of functions in which the root function accesses an encrypted message; this does not require fine-grained DTA. FCReducer then applies a community detection method such as InfoMap to the graph to detect a community of functions that plays a role in decryption or encryption. The functions in this community are provided as candidates. In experiments using 12 samples, including four malware specimens, we confirmed that FCReducer reduced, for example, the 4830 functions called by Zeus malware to 0.87% as candidates. We also propose a heuristic that reduces the candidates even further.
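
    The graph-plus-community-detection pipeline can be sketched compactly. The function names and edges below are hypothetical, and greedy modularity maximization stands in for the InfoMap method named in the abstract.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical data-dependency graph of functions: the root accesses the
# encrypted message; one tight cluster plays the decryption role.
G = nx.Graph()
G.add_edges_from([
    ("recv_msg", "decrypt_block"), ("decrypt_block", "key_schedule"),
    ("decrypt_block", "sbox_lookup"), ("key_schedule", "sbox_lookup"),
    ("recv_msg", "log_event"), ("log_event", "format_string"),
    ("format_string", "alloc_buf"), ("log_event", "alloc_buf"),
])

# The paper uses a detector such as InfoMap; greedy modularity
# maximization is used here only as a readily available stand-in.
for community in greedy_modularity_communities(G):
    print(sorted(community))
```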

  • On the Optimal Approach of Survivable Virtual Network Embedding in Virtualized SDN

    Rongzhen LI  Qingbo WU  Yusong TAN  Junyang ZHANG  

     
    PAPER-Information Network

      Publicized:
    2017/12/18
      Page(s):
    698-708

    Software-defined networking (SDN) has emerged as a promising approach to enabling network innovation; through a hypervisor plane, it provides network virtualization that shares the same cloud datacenter network among multiple virtual networks. However, this attractive approach introduces new problems: because of the separation of the control and forwarding planes, the network becomes more susceptible to the failure of network components. The centralized control and the virtual networks sharing the same physical network become fragile and prone to failure if the topology of the virtual network and the control path is not properly designed. Thus, mapping a virtual network onto the physical datacenter network in virtualized SDN while guaranteeing survivability against physical component failures is extremely important, and the factors that influence the survivability of the virtual network must be fully considered. In this paper, combining virtual networks (VN) with SDN, a topology-aware survivable virtual network embedding approach is proposed that improves the survivability of the virtual network through an enhanced virtual controller embedding strategy, optimizing the placement of the virtual network without using any backup resources. The strategy explicitly takes into account the network delay and the number of disjoint paths between the virtual controller and the virtual switches so as to minimize the expected percentage of control path loss weighted by a survivability factor. Extensive experimental evaluations verify that the proposed technique improves survivability and reduces network delay while keeping the other metrics within reasonable bounds.

  • On the Properties and Applications of Inconsistent Neighborhood in Neighborhood Rough Set Models

    Shujiao LIAO  Qingxin ZHU  Rui LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/20
      Page(s):
    709-718

    Rough set theory is an important branch of data mining and granular computing, in which neighborhood rough sets were introduced to handle numerical and hybrid data. In this paper, we propose a new concept called the inconsistent neighborhood, which extracts the inconsistent objects from a traditional neighborhood. Firstly, a series of interesting properties is obtained for inconsistent neighborhoods; in particular, some properties yield new ways to compute the quantities used in neighborhood rough sets. Then, a fast forward attribute reduction algorithm is proposed by applying these properties. Experiments on twelve UCI datasets show that the proposed algorithm obtains the same attribute reduction results as existing algorithms in the neighborhood rough set domain while running much faster. This validates that employing inconsistent neighborhoods is advantageous in applications of neighborhood rough sets. The study provides a new insight into neighborhood rough set theory.
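
    Our reading of the inconsistent-neighborhood definition can be made concrete as follows. The sketch (illustrative, using a Chebyshev-distance neighborhood) extracts from a traditional neighborhood the objects whose decision label disagrees with the center object's.

```python
def neighborhood(data, i, delta):
    """Indices of objects within distance delta of object i (traditional
    neighborhood under the selected numerical attributes)."""
    xi = data[i][0]
    return [j for j, (xj, _) in enumerate(data)
            if max(abs(a - b) for a, b in zip(xi, xj)) <= delta]

def inconsistent_neighborhood(data, i, delta):
    """Objects in the neighborhood whose decision label differs from
    object i's label (our reading of the paper's definition)."""
    label_i = data[i][1]
    return [j for j in neighborhood(data, i, delta)
            if data[j][1] != label_i]

# (features, decision label) pairs
data = [((0.10, 0.20), "yes"), ((0.12, 0.21), "yes"),
        ((0.13, 0.19), "no"),  ((0.90, 0.80), "no")]
print(inconsistent_neighborhood(data, 0, delta=0.05))   # -> [2]
```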

  • Efficient Reformulation of 1-Norm Ranking SVM

    Daiki SUEHIRO  Kohei HATANO  Eiji TAKIMOTO  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/04
      Page(s):
    719-729

    Finding linear functions that maximize AUC scores is important in ranking research. A typical approach to the ranking problem is to reduce it to a binary classification problem over a new instance space consisting of all pairs of positive and negative instances. Specifically, this approach is formulated as a hard or soft margin optimization problem over pn pairs of p positive and n negative instances. Solving these optimization problems directly is impractical, since we have to deal with a sample of size pn, which is quadratically larger than the original sample size p+n. In this paper, we reformulate the ranking problem as variants of hard and soft margin optimization problems over only p+n instances. The resulting classifiers of our methods are guaranteed to achieve certain AUC scores.
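
    The quadratic blow-up that motivates the reformulation is easy to see. The sketch below materializes all p·n positive-negative pairs to compute the AUC of a linear scorer, which is exactly the cost the paper's p+n formulation avoids.

```python
def auc(w, positives, negatives):
    """AUC of a linear scorer w: the fraction of (positive, negative)
    pairs ranked correctly.  Materializing all p*n pairs, as below, is
    what the paper's reformulation over p+n instances avoids."""
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    pairs = [(p, q) for p in positives for q in negatives]   # size p*n
    return sum(score(p) > score(q) for p, q in pairs) / len(pairs)

positives = [(1.0, 2.0), (2.0, 1.5)]
negatives = [(0.5, 0.5), (1.0, 0.2), (0.1, 1.0)]
print(auc((0.7, 0.3), positives, negatives))
```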

  • Effects of Automated Transcripts on Non-Native Speakers' Listening Comprehension

    Xun CAO  Naomi YAMASHITA  Toru ISHIDA  

     
    PAPER-Human-computer Interaction

      Publicized:
    2017/11/24
      Page(s):
    730-739

    Previous research has shown that transcripts generated by automatic speech recognition (ASR) technologies can improve the listening comprehension of non-native speakers (NNSs). However, we still lack a detailed understanding of how ASR transcripts affect the listening comprehension of NNSs. To explore this issue, we conducted two studies. The first study examined how the current presentation of ASR transcripts impacts NNSs' listening comprehension. Twenty NNSs engaged in two listening tasks under different conditions: C1) audio only and C2) audio plus ASR transcripts. The participants pressed a button whenever they encountered a comprehension problem and explained each problem in subsequent interviews. From our data analysis, we found that NNSs adopted different strategies when using the ASR transcripts: some followed the transcripts throughout the listening, while others checked them only when necessary. NNSs also appeared to have difficulty following imperfect and slightly delayed transcripts while listening to speech; many reported difficulties concentrating on listening/reading or shifting between the two. The second study explored how different display methods for ASR transcripts affected NNSs' listening experiences. We focused on two display methods: 1) an accuracy-oriented display, which shows transcripts only after the completion of speech input analysis, and 2) a speed-oriented display, which shows the interim analysis results of the speech input. We conducted a laboratory experiment with 22 NNSs who engaged in two listening tasks with ASR transcripts presented via the two display methods. We found that the more the NNSs paid attention to listening to the audio, the more they tended to prefer the speed-oriented transcripts, and vice versa. Mismatched transcripts were found to have negative effects on NNSs' listening comprehension. Our findings have implications for improving the presentation of ASR transcripts to support NNSs more effectively.

  • Cybersecurity Education and Training Support System: CyRIS

    Razvan BEURAN  Cuong PHAM  Dat TANG  Ken-ichi CHINEN  Yasuo TAN  Yoichi SHINODA  

     
    PAPER-Educational Technology

      Publicized:
    2017/11/24
      Page(s):
    740-749

    Given the worldwide proliferation of cyberattacks, it is imperative that cybersecurity education and training be addressed in a timely manner. These activities typically require trainees to do hands-on practice in order to consolidate and improve their skills, for which purpose training environments called cyber ranges are used. In this paper we present an open-source system named CyRIS (Cyber Range Instantiation System) that supports this endeavor by fully automating the training environment setup, thus making it possible for any organization to conduct more numerous and varied training activities. CyRIS uses a text-based representation in YAML format to describe the characteristics of the training environment, including both environment setup and security content generation. Based on this description, CyRIS automatically creates the corresponding cyber range instances on a computer and network infrastructure, for simultaneous use by multiple trainees. We have evaluated CyRIS in various realistic scenarios, and our results demonstrate its suitability for cybersecurity education and training, both in terms of features and performance, including for large-scale training sessions with hundreds of participants.

  • Classification of Utterances Based on Multiple BLEU Scores for Translation-Game-Type CALL Systems

    Reiko KUWA  Tsuneo KATO  Seiichi YAMAMOTO  

     
    PAPER-Speech and Hearing

      Publicized:
    2017/12/04
      Page(s):
    750-757

    This paper proposes a method for classifying second-language-learner utterances for interactive computer-assisted language learning systems. The classification method uses three types of bilingual evaluation understudy (BLEU) scores as features for a classifier. The three BLEU scores are calculated over three subsets of a learner corpus divided according to the quality of the utterances. To overcome the data-sparseness problem, the method calculates the BLEU scores on a mixture of word and part-of-speech (POS)-tag sequences, converted from word sequences by a POS-replacement rule under which words in n-grams are replaced with POS tags. Experiments on classifying English utterances by Japanese learners demonstrated that the proposed method achieved a classification accuracy of 78.2%, which was 12.3 points higher than a baseline using a single BLEU score.
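
    The feature construction can be sketched with a simplified BLEU ingredient. The code below computes clipped n-gram precision (omitting BLEU's brevity penalty) against three quality-based corpus subsets, whose contents here are invented for illustration, yielding one feature per subset.

```python
from collections import Counter

def ngram_precision(cand, refs, n):
    """Clipped n-gram precision, the core ingredient of BLEU."""
    cand_ngrams = Counter(zip(*(cand[i:] for i in range(n))))
    if not cand_ngrams:
        return 0.0
    max_ref = Counter()
    for ref in refs:
        ref_ngrams = Counter(zip(*(ref[i:] for i in range(n))))
        for g, c in ref_ngrams.items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())

# Three corpus subsets by utterance quality (contents invented):
# one bigram-precision feature per subset feeds the classifier.
utterance = "i want to go to station".split()
subsets = {
    "good": [["i", "want", "to", "go", "to", "the", "station"]],
    "fair": [["i", "go", "station"]],
    "poor": [["station", "go"]],
}
features = [ngram_precision(utterance, refs, 2) for refs in subsets.values()]
print(features)   # e.g. [0.8, 0.0, 0.0]
```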

  • Pose Estimation with Action Classification Using Global-and-Pose Features and Fine-Grained Action-Specific Pose Models

    Norimichi UKITA  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/12/08
      Page(s):
    758-766

    This paper proposes an iterative scheme between human action classification and pose estimation in still images. Initial action classification is achieved only by global image features that consist of the responses of various object filters. The classification likelihood of each action weights human poses estimated by the pose models of multiple sub-action classes. Such fine-grained action-specific pose models allow us to robustly identify the pose of a target person under the assumption that similar poses are observed in each action. From the estimated pose, pose features are extracted and used with global image features for action re-classification. This iterative scheme can mutually improve action classification and pose estimation. Experimental results with a public dataset demonstrate the effectiveness of the proposed method both for action classification and pose estimation.

  • Person Identification Using Pose-Based Hough Forests from Skeletal Action Sequence

    Ju Yong CHANG  Ji Young PARK  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/12/04
      Page(s):
    767-777

    The present study considers an action-based person identification problem, in which an input action sequence consists of 3D skeletal data from multiple frames. Unlike previous approaches, the type of action is not pre-defined in this work, which requires the subject classifier to possess cross-action generalization capabilities. To achieve that, we present a novel pose-based Hough forest framework, in which each per-frame pose feature casts a probabilistic vote to the Hough space. Pose distribution is estimated from training data and then used to compute the reliability of the vote to deal with the unseen poses in the test action sequence. Experimental results with various real datasets demonstrate that the proposed method provides effective person identification results especially for the challenging cross-action person identification setting.

  • Corpus Expansion for Neural CWS on Microblog-Oriented Data with λ-Active Learning Approach

    Jing ZHANG  Degen HUANG  Kaiyu HUANG  Zhuang LIU  Fuji REN  

     
    PAPER-Natural Language Processing

      Publicized:
    2017/12/08
      Page(s):
    778-785

    Microblog data contains rich information about real-world events and has great commercial value, so microblog-oriented natural language processing (NLP) tasks have attracted considerable attention from researchers. However, the performance of microblog-oriented Chinese Word Segmentation (CWS) based on deep neural networks (DNNs) is still unsatisfactory. One critical reason is that the existing microblog-oriented training corpus is inadequate for training effective weight matrices for DNNs. In this paper, we propose a novel active learning method to extend the scale of the training corpus for DNNs. However, due to the large number of partially overlapping sentences in microblogs, it is difficult to select samples with high annotation value from raw microblogs during the active learning procedure. To select samples with higher annotation value, a parameter λ is introduced to control the number of repeatedly selected samples, and various strategies are adopted to measure the overall annotation value of a sample during the active learning procedure. Experiments on the benchmark datasets of NLPCC 2015 show that our λ-active learning method outperforms the baseline system and the state-of-the-art method. The results also demonstrate that the performance of DNNs trained on the extended corpus is significantly improved.

  • GPU-Accelerated Stochastic Simulation of Biochemical Networks

    Pilsung KANG  

     
    LETTER-Fundamentals of Information Systems

      Publicized:
    2017/12/20
      Page(s):
    786-790

    We present a GPU (graphics processing unit) accelerated implementation of a stochastic algorithm for simulating biochemical reaction networks on the latest NVidia architecture. To effectively utilize the massive parallelism offered by the NVidia Pascal hardware, we apply a set of performance tuning methods and guidelines, such as exploiting the architecture's memory hierarchy, in our implementation. Based on our experimental results and a comparative analysis against CPU-based implementations, we report our initial experiences with the performance of modern GPUs in the context of scientific computing.
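
    For reference, the stochastic algorithm for such simulations is typically Gillespie's direct method; a minimal CPU sketch of one step is shown below (illustrative, on a toy birth-death network; whether the letter uses this exact variant is our assumption, and its contribution is the GPU implementation and tuning).

```python
import math
import random

def gillespie_step(x, reactions, rates):
    """One step of Gillespie's direct method: sample the waiting time
    and the next reaction from the current propensities."""
    props = [rate(x) for rate in rates]
    total = sum(props)
    if total == 0.0:
        return None, x                              # no reaction can fire
    tau = -math.log(random.random()) / total        # exponential waiting time
    r = random.random() * total
    acc = 0.0
    for prop, change in zip(props, reactions):
        acc += prop
        if r < acc:
            return tau, [xi + di for xi, di in zip(x, change)]
    return tau, x

# Toy birth-death network: 0 -> A at rate k1, A -> 0 at rate k2*A.
k1, k2 = 5.0, 0.1
reactions = [(+1,), (-1,)]
rates = [lambda x: k1, lambda x: k2 * x[0]]
t, x = 0.0, [0]
while t < 100.0:
    tau, x = gillespie_step(x, reactions, rates)
    if tau is None:
        break
    t += tau
print("A ≈", x[0], "(steady state ~ k1/k2 = 50)")
```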

  • A Highly Adaptive Lossless ECG Compression ASIC for Wireless Sensors Based on Hybrid Golomb Coding

    Jiahui LUO  Zhijian CHEN  Xiaoyan XIANG  Jianyi MENG  

     
    LETTER-Computer System

      Publicized:
    2017/12/14
      Page(s):
    791-794

    This work presents a low-complexity lossless electrocardiogram (ECG) compression ASIC for wireless sensors. Three linear predictors targeting different signal characteristics are provided, with prediction based on a history table that records the optimum predictors for recent samples. Unlike traditional methods using a unified encoder, the prediction error is encoded by a hybrid Golomb encoder that combines Exp-Golomb and Golomb-Rice coding and can adaptively configure the encoding scheme according to the predictor selection. The novel adaptive prediction and encoding scheme achieves a compression rate of 2.77 on the MIT-BIH Arrhythmia database. Implemented in a 40nm CMOS process, the design takes a small gate count of 1.82K with 37.6nW power consumption under a 0.9V supply voltage.
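
    The Golomb-Rice half of the hybrid encoder can be sketched briefly. The code below (illustrative; the ASIC's Exp-Golomb branch and its adaptive selection are not reproduced) shows how small prediction errors map to short codewords.

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code with divisor 2**k: unary quotient, '0' separator,
    then a k-bit binary remainder.  Small values get short codewords."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(e):
    """Map signed prediction error to unsigned: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return (e << 1) if e >= 0 else ((-e << 1) - 1)

for err in (0, -1, 2, -7):
    print(err, "->", golomb_rice_encode(zigzag(err), k=2))
```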

  • Pipelined Squarer for Unsigned Integers of Up to 12 Bits

    Seongjin CHOI  Hyeong-Cheol OH  

     
    LETTER-Computer System

      Publicized:
    2017/12/06
      Page(s):
    795-798

    This paper proposes and analyzes a pipelining scheme for a hardware squarer that can square unsigned integers of up to 12 bits. Each stage is designed and adjusted so that the stage delays are well balanced and the critical path delay of the design does not exceed a reference value set based on the analysis. The resulting design has a critical path delay of approximately 3.5 full-adder delays. In an implementation on an Intel Stratix V FPGA, the design operates at an approximately 23% higher frequency than the comparable pipelined squarer provided in the Intel library.

  • Extraction of Library Update History Using Source Code Reuse Detection

    Kanyakorn JEWMAIDANG  Takashi ISHIO  Akinori IHARA  Kenichi MATSUMOTO  Pattara LEELAPRUTE  

     
    LETTER-Software Engineering

      Publicized:
    2017/12/20
      Page(s):
    799-802

    This paper proposes a method to extract and visualize a library update history in a project. The method identifies reused library versions by comparing source code in a product with existing versions of the library so that developers can understand when their own copy of a library has been copied, modified, and updated.

  • How to Preserve User Anonymity in Password-Based Anonymous Authentication Scheme

    SeongHan SHIN  Kazukuni KOBARA  

     
    LETTER-Information Network

      Publicized:
    2017/12/13
      Page(s):
    803-807

    The purpose of password-based anonymous authentication schemes is to provide not only password-based authentication but also user anonymity. In [19], Yang et al. proposed a password-based anonymous authentication scheme (which we call the YZWB10 scheme) using password-protected credentials. In this paper, we discuss the user anonymity of the YZWB10 scheme [19] against a third-party attacker, who is much weaker than a malicious server. First, we show that a third-party attacker in the YZWB10 scheme can determine which user actually sent a login request to the server. This attack also allows the attacker to link different login requests sent later by the same user. Second, we give an effective countermeasure against this attack that does not require any security for the storage of users' password-protected credentials.

  • A Method for Gathering Sensor Data for Fish-Farm Monitoring Considering the Transmission-Range Volume

    Koichi ISHIDA  Yoshiaki TANIGUCHI  Nobukazu IGUCHI  

     
    LETTER-Information Network

      Publicized:
    2017/12/12
      Page(s):
    808-811

    We have previously proposed a fish-farm monitoring system. In our system, the transmission range of acoustic waves from sensors attached to the undersides of the fish is not omnidirectional because of obstruction by the fish bodies. In addition, energy-efficient control is highly important in our system to avoid the need to replace batteries. In this letter, we propose a data-gathering method for fish-farm monitoring that avoids the use of control packets so that energy-efficient operation is possible. Instead, our method uses the transmission-range volume, as calculated from the location of the sensor node, to determine the timing of packet transmission. Through simulation evaluations, we show that the data-gathering performance of our proposed method is better than that of comparative methods.

  • Self-Paced Learning with Statistics Uncertainty Prior

    Lihua GUO  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/13
      Page(s):
    812-816

    Self-paced learning (SPL) gradually trains on the data from easy to hard, including more data in the training process in a self-paced manner. The advantage of SPL is its ability to avoid bad local minima, which improves generalization performance. However, SPL requires an expert to judge the complexity of the data at the beginning of training. Generally, such an expert does not exist at the beginning and is instead learned by gradually training on the samples. Based on this consideration, we add uncertainty in the complexity judgment to SPL and propose self-paced learning with an uncertainty prior (SPUP). To solve our system's optimization function efficiently, iterative optimization and a statistical simulated annealing method are introduced. The experimental results indicate that our SPUP is more robust to outliers and achieves higher accuracy and lower error than SPL.

  • Research on Analytical Solution Tensor Voting

    Hongbin LIN  Zheng WU  Dong LEI  Wei WANG  Xiuping PENG  

     
    LETTER-Pattern Recognition

      Publicized:
    2017/12/01
      Page(s):
    817-820

    This letter presents a novel tensor voting mechanism, analytic tensor voting (ATV), that removes the difficulties of the original tensor voting (OTV), especially its inefficiency. One of its main advantages is its explicit voting formulations, which benefit both the completeness of tensor voting theory and computational efficiency. Firstly, a new decaying function is designed following the basic spirit of the decaying function in OTV. Secondly, analytic stick tensor voting (ASTV) is formulated using the new decaying function. Thirdly, analytic plate and ball tensor voting (APTV, ABTV) are formulated through controllable stick tensor construction and tensorial integration. As a result, each tensor vote can be computed with a few non-iterative matrix operations, remarkably improving the efficiency of tensor voting. Experimental results validate the effectiveness of the proposed method.

  • Multiple Matrix Rank Minimization Approach to Audio Declipping

    Ryohei SASAKI  Katsumi KONISHI  Tomohiro TAKAHASHI  Toshihiro FURUKAWA  

     
    LETTER-Speech and Hearing

      Publicized:
    2017/12/06
      Page(s):
    821-825

    This letter deals with an audio declipping problem and proposes a multiple matrix rank minimization approach. We assume that short-time audio signals satisfy the autoregressive (AR) model and formulate the declipping problem as a multiple matrix rank minimization problem. To solve this problem, an iterative algorithm is provided based on the iterative partial matrix shrinkage (IPMS) algorithm. Numerical examples show its efficiency.
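
    Why rank minimization helps can be seen from the AR assumption. In the sketch below (a toy AR(2) signal with assumed coefficients), windows of the clean signal stack into a rank-deficient matrix, while clipping breaks that low-rank structure; declipping then amounts to searching for a low-rank-consistent completion of the clipped samples.

```python
import numpy as np

# A short-time signal obeying an AR(2) model makes the windowed matrix
# below rank-deficient (rank <= 2); clipping destroys that structure,
# which is what rank minimization exploits to restore the samples.
n, order = 64, 2
a1, a2 = 1.6, -0.81                  # stable AR(2) coefficients (assumed)
x = np.zeros(n)
x[0], x[1] = 1.0, 1.2
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2]

windows = np.array([x[i:i + order + 1] for i in range(n - order)])
clipped = np.clip(x, -1.0, 1.0)      # simulated clipping
windows_c = np.array([clipped[i:i + order + 1] for i in range(n - order)])

print("rank of clean windowed matrix:  ", np.linalg.matrix_rank(windows))
print("rank of clipped windowed matrix:", np.linalg.matrix_rank(windows_c))
```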

  • A Color Restoration Method for Irreversible Thermal Paint Based on Atmospheric Scattering Model

    Zhan WANG  Ping-an DU  Jian LIU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/12/08
      Page(s):
    826-829

    Irreversible thermal paints, or temperature-sensitive paints, are a kind of special temperature sensor that indicates temperature gradations through color changes; they are widely used for off-line temperature measurement during aero engine tests. Unfortunately, the hot gas flow within the engine during measurement always degrades the paint color, causing a serious reduction in saturation and loss of contrast, which makes thermal paint test results more difficult to interpret. Existing contrast enhancement algorithms can significantly increase image contrast but cannot effectively preserve the hue of the paint images, which often causes color shift. In this paper, we propose a color restoration method for thermal paint images. The method uses the atmospheric scattering model to restore the lost contrast and saturation information, so that the hue is preserved and the temperature can be precisely interpreted from the image.
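
    The restoration step can be sketched directly from the scattering model I = J·t + A·(1-t). In the code below, the airlight A and transmission t are assumed values; in practice both would be estimated from the degraded image, as in dehazing methods.

```python
import numpy as np

def restore(I, A, t):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to
    recover the scene radiance J, restoring saturation and contrast."""
    t = max(t, 0.1)                        # avoid division blow-up
    return (I - A * (1.0 - t)) / t

# Toy pixel values in [0,1]: a degraded (washed-out) color, plus assumed
# airlight and transmission estimates.
I = np.array([0.70, 0.62, 0.58])           # observed degraded RGB
A = np.array([0.85, 0.85, 0.85])           # airlight estimate (assumed)
t = 0.55                                   # transmission estimate (assumed)
print(restore(I, A, t))                    # more saturated restored RGB
```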

  • Action Recognition Using Low-Rank Sparse Representation

    Shilei CHENG  Song GU  Maoquan YE  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/11/24
      Page(s):
    830-834

    Human action recognition in videos draws huge research interest in computer vision. The Bag-of-Words (BoW) model is commonly used to obtain video-level representations; however, the BoW model roughly assigns each feature vector to its nearest visual word, and the collection of unordered words ignores the spatial information of the interest points, inevitably causing nontrivial quantization errors and limiting improvements in classification rates. To address these drawbacks, we propose an approach for action recognition that encodes spatio-temporal log-Euclidean covariance matrix (ST-LECM) features within a low-rank and sparse representation framework. Motivated by low-rank matrix recovery, local descriptors in a spatio-temporal neighborhood have similar representations and should be approximately low rank. The learned coefficients can not only capture the global data structure but also preserve consistency. Experimental results show that the proposed approach yields excellent recognition performance on synthetic video datasets and is robust to action variability, view variations, and partial occlusion.