
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E98-D No.12  (Publication Date:2015/12/01)

    Special Section on Parallel and Distributed Computing and Networking
  • FOREWORD Open Access

    Yasuhiko NAKASHIMA  

     
    FOREWORD

      Page(s):
    2047-2047
  • A Proposal of Access Point Selection Method Based on Cooperative Movement of Both Access Points and Users

    Ryo HAMAMOTO  Tutomu MURASE  Chisa TAKANO  Hiroyasu OBATA  Kenji ISHIDA  

     
    PAPER-Wireless System

      Publicized:
    2015/09/15
      Page(s):
    2048-2059

    In recent years, wireless local area networks (wireless LANs) based on the IEEE 802.11 standard have spread rapidly, and connecting to the Internet using wireless LANs has become common. In addition, public wireless LAN service areas, such as train stations, hotels, and airports, are increasing, and tethering technology has enabled smartphones to act as access points (APs). Consequently, there can be multiple APs in the same area, and users must select one of many APs. Various studies have proposed and evaluated AP selection methods; however, existing methods do not consider AP mobility. In this paper, we propose an AP selection method based on cooperation among APs and user movement. Moreover, we demonstrate that the proposed method dramatically improves throughput compared to an existing method.

  • SP-MAC: A Media Access Control Method Based on the Synchronization Phenomena of Coupled Oscillators over WLAN

    Hiroyasu OBATA  Ryo HAMAMOTO  Chisa TAKANO  Kenji ISHIDA  

     
    PAPER-Wireless System

      Publicized:
    2015/09/15
      Page(s):
    2060-2070

    Wireless local area networks (LANs) based on the IEEE 802.11 standard usually use carrier sense multiple access with collision avoidance (CSMA/CA) for media access control. However, in CSMA/CA, when the number of wireless terminals increases, the back-off times derived from the initial contention window (CW) tend to coincide among terminals. Consequently, data frame collisions often occur, which sometimes degrades the total throughput of transport layer protocols. In this study, to improve the total throughput, we propose a new media access control method, SP-MAC, which is based on the synchronization phenomena of coupled oscillators. Moreover, this study shows that SP-MAC drastically decreases the data frame collision probability and improves the total throughput compared with the original CSMA/CA method.

  • GA-MAP: An Error Tolerant Address Mapping Method in Data Center Networks Based on Improved Genetic Algorithm

    Gang DENG  Hong WANG  Zhenghu GONG  Lin CHEN  Xu ZHOU  

     
    PAPER-Network

      Publicized:
    2015/09/15
      Page(s):
    2071-2081

    Address configuration is a key problem in data center networks. The core issue of automatic address configuration is assigning logical addresses to the physical network according to a blueprint, namely logical-to-device ID mapping, which can be formulated as a graph isomorphism problem and is hard. In recent years, several approaches have been proposed for this problem, such as DAC and ETAC. DAC adopts a sub-graph isomorphism algorithm; by leveraging the structural characteristics of data center networks, it can finish the mapping process quickly when there is no malfunction. However, in the presence of malfunctions, DAC needs human effort to correct them and is thus time-consuming. ETAC improves on DAC and can finish mapping even in the presence of malfunctions, but it also suffers from robustness and efficiency problems. In this paper, we present GA-MAP, an address mapping algorithm for data center networks based on a genetic algorithm. By intelligently leveraging the structural characteristics of data center networks and the global search capability of genetic algorithms, GA-MAP can solve the address mapping problem quickly. Moreover, GA-MAP can finish address mapping even when the physical network contains malfunctions, making it more robust than ETAC. We evaluate GA-MAP via extensive simulation in several aspects, including computation time, error tolerance, convergence characteristics, and the influence of population size. The simulation results demonstrate that GA-MAP is effective for data center address mapping.
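
    As a rough illustration of the idea only (not the authors' actual encoding, operators, or fitness function), the sketch below evolves a logical-to-device ID permutation with a mutation-only genetic search that penalizes blueprint edges whose mapped endpoints are not adjacent in the physical topology; all names and parameters are hypothetical.

      import random

      def mismatch(perm, blueprint_edges, physical_adj):
          # Fitness: blueprint edges whose mapped endpoints are not adjacent
          # in the physical topology (0 means a perfect, error-free mapping).
          return sum(1 for (u, v) in blueprint_edges
                     if perm[v] not in physical_adj[perm[u]])

      def ga_map(blueprint_edges, physical_adj, n, pop_size=50, generations=200):
          pop = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda p: mismatch(p, blueprint_edges, physical_adj))
              if mismatch(pop[0], blueprint_edges, physical_adj) == 0:
                  break                          # perfect mapping found
              survivors = pop[:pop_size // 2]
              children = []
              for p in survivors:                # mutate by swapping two logical IDs
                  child = p[:]
                  i, j = random.sample(range(n), 2)
                  child[i], child[j] = child[j], child[i]
                  children.append(child)
              pop = survivors + children
          return min(pop, key=lambda p: mismatch(p, blueprint_edges, physical_adj))

      # Toy 4-node ring: logical blueprint and physical adjacency are identical here.
      edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
      adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
      print(ga_map(edges, adj, 4))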

  • Survivability Analysis of VM-Based Intrusion Tolerant Systems

    Junjun ZHENG  Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER-Network

      Publicized:
    2015/09/15
      Page(s):
    2082-2090

    Survivability is the capability of a system to provide its services in a timely manner even after intrusion and compromise occur. In this paper, we focus on the quantitative survivability analysis of a virtual machine (VM) based intrusion tolerant system in the presence of Byzantine failures caused by malicious attacks. An intrusion tolerant system can continuously provide correct services even if the system is intruded. This paper introduces a scheme for an intrusion tolerant system with virtualization, and derives the success probability of one request using a Markov chain under an environment in which VMs have been intruded through a security hole by malicious attacks. Finally, in numerical experiments, we evaluate the performance of the VM-based intrusion tolerant system from the viewpoint of survivability.
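
    A minimal sketch of the kind of calculation involved, with a made-up four-state chain (the states and transition probabilities below are illustrative assumptions, not the paper's model): the success probability of one request is the probability of being absorbed in the success state.

      import numpy as np

      # Hypothetical chain for one request: 0 = healthy VM serving,
      # 1 = intruded VM serving, 2 = request succeeds (absorbing),
      # 3 = request fails (absorbing).
      P = np.array([
          [0.00, 0.10, 0.88, 0.02],
          [0.00, 0.00, 0.40, 0.60],
          [0.00, 0.00, 1.00, 0.00],
          [0.00, 0.00, 0.00, 1.00],
      ])
      Q = P[:2, :2]                        # transitions among transient states
      R = P[:2, 2:]                        # transient -> absorbing
      N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
      B = N @ R                            # absorption probabilities
      print("success probability from a healthy VM:", B[0, 0])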

  • Modeling and Testing of Network Protocols with Parallel State Machines

    Xia YIN  Jiangyuan YAO  Zhiliang WANG  Xingang SHI  Jun BI  Jianping WU  

     
    PAPER-Network

      Publicized:
    2015/09/15
      Page(s):
    2091-2104

    Research on model-based testing has mainly focused on models with a single component, such as FSM and EFSM. For network protocols whose multiple components communicate with messages, CFSM is a widely accepted solution. However, in some network protocols, parallel and data-shared components may exist in the same network entity, and it is infeasible to specify such protocols precisely with existing models. In this paper we present a new model, the Parallel Parameterized Extended Finite State Machine (PaP-EFSM). A protocol system can be modeled with a group of PaP-EFSMs, which work in parallel and can read external variables from each other. We present a two-stage test generation approach for the new model. First, we generate test sequences for the internal variables of each machine; these may be non-executable due to external variables. Second, we process the external variables: we make the sequences for internal variables executable and generate further test sequences for the external variables. For validation, we apply this method to conformance testing of real-life protocols. Devices from different vendors were tested and implementation faults were exposed.

  • Novel High Performance Scheduling Algorithms for Crosspoint Buffered Crossbar Switches

    Xiaoting WANG  Yiwen WANG  Shichao LI  Ping LI  

     
    PAPER-Switching System

      Publicized:
    2015/09/15
      Page(s):
    2105-2115

    The crossbar-based switch fabric is widely used in today's high-performance switches because it is internally nonblocking and simple to implement. There are two main switching architectures for crossbar-based switch fabrics: the internally bufferless crossbar switch and the crosspoint buffered crossbar switch. As the internally bufferless crossbar switch requires a complex centralized scheduler that limits its scalability to high speeds, the crosspoint buffered crossbar switch has gained more attention because of its simpler distributed scheduling algorithms and better switching performance. However, almost all scheduling algorithms previously proposed for crosspoint buffered crossbar switches either show unsatisfactory scheduling performance under non-uniform traffic patterns or poor service fairness between input traffic flows. To overcome the disadvantages of existing algorithms, in this paper we propose two novel high-performance scheduling algorithms, MCQF_RR and IMCQF_RR, for crosspoint buffered crossbar switches. Both algorithms have a time complexity of O(log N), where N is the number of input/output ports of the switch. MCQF_RR performs scheduling using combined weight information about the queue length and the service waiting time of input queues. To further reduce the scheduling complexity and make it feasible for high-speed switches, IMCQF_RR uses compressed queue length information instead of the original queue length information to schedule cells in input VOQs. Simulation results show that MCQF_RR and IMCQF_RR achieve excellent delay performance, comparable to existing high-performance scheduling algorithms, under both uniform and non-uniform traffic patterns, while maintaining good service fairness under severe non-uniform traffic patterns.

  • The Fault-Tolerant Hamiltonian Problems of Crossed Cubes with Path Faults

    Hon-Chan CHEN  Tzu-Liang KUNG  Yun-Hao ZOU  Hsin-Wei MAO  

     
    PAPER-Switching System

      Publicized:
    2015/09/15
      Page(s):
    2116-2122

    In this paper, we investigate the fault-tolerant Hamiltonian problems of crossed cubes with a faulty path. More precisely, let P denote any path in an n-dimensional crossed cube CQn for n ≥ 5, and let V(P) be the vertex set of P. We show that CQn-V(P) is Hamiltonian if |V(P)| ≤ n and is Hamiltonian connected if |V(P)| ≤ n-1. Compared with the previous results showing that the crossed cube is (n-2)-fault-tolerant Hamiltonian and (n-3)-fault-tolerant Hamiltonian connected for arbitrary faults, the contribution of this paper indicates that the crossed cube can tolerate more faulty vertices if these vertices happen to form some specific types of structures.

  • Failure Detection in P2P-Grid System

    Huan WANG  Hidenori NAKAZATO  

     
    PAPER-Grid System

      Publicized:
    2015/09/15
      Page(s):
    2123-2131

    Peer-to-peer (P2P)-Grid systems are being investigated as a platform that merges Grid and P2P networks for constructing large-scale distributed applications. The highly dynamic nature of P2P-Grid systems greatly affects the execution of distributed programs: uncertainty caused by arbitrary node failure and departure significantly affects the availability of computing resources and system performance. Checkpoint-and-restart is the most common scheme for fault tolerance because it periodically saves the execution progress onto stable storage. In this paper, we adopt a checkpoint-and-restart mechanism as a fault-tolerance method for applications on P2P-Grid systems. A failure detection mechanism is, in general, a necessary prerequisite for fault tolerance and fault recovery, and given the highly dynamic nature of nodes within P2P-Grid systems, any failure should be detected to ensure effective task execution. Therefore, we study failure detection mechanisms as an integral part of P2P-Grid systems and discuss how the design of various failure detection algorithms affects their performance in terms of the average failure detection time of nodes. Numerical analysis results and an implementation evaluation are also provided to show the different average failure detection times of the various algorithms in real systems. The comparison shows the shortest average failure detection time, 8.8 s, on the basis of the WP failure detector. Our lowest mean time to recovery (MTTR) is also shown to have a distinct advantage, reducing time consumption by about 5.5 s over its counterparts.

  • Dynamic Job Scheduling Method Based on Expected Probability of Completion of Voting in Volunteer Computing

    Yuto MIYAKOSHI  Shinya YASUDA  Kan WATANABE  Masaru FUKUSHI  Yasuyuki NOGAMI  

     
    PAPER-Grid System

      Publicized:
    2015/09/15
      Page(s):
    2132-2140

    This paper addresses the problem of job scheduling in volunteer computing (VC) systems, where each computation job is replicated and allocated to multiple participants (workers) so that incorrect results can be removed by a voting mechanism. In VC job scheduling, the number of workers needed to complete a job is an important factor in system performance; however, it cannot be fixed because some workers may secede in real VC. Existing methods have not considered this problem in job scheduling. We propose a dynamic job scheduling method that considers the expected probability of completion (EPC) of each job based on the probability of worker secession. The key idea of the proposed method is to allocate jobs so that EPC is always greater than a specified value (SPC). By setting SPC to a reasonable value, the proposed method completes jobs without excess allocation, which leads to higher performance of VC systems. We assume in this paper that the worker secession probability follows a Weibull distribution, which is known to reflect practical situations more closely. We derive parameters for the distribution using actual trace data and compare the performance of the proposed and previous methods under the Weibull distribution model as well as the previous constant-probability model. Simulation results show that the performance of the proposed method is up to 5 times higher than that of the existing method, especially when the time for completing jobs is restricted, while keeping the error rate lower than a required value.
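
    The allocation rule can be sketched as follows, under stated assumptions: per-worker completion probabilities come from an illustrative Weibull survival function (the shape and scale below are not fitted to any real trace), and "completion of voting" is simplified to "at least m results are returned"; the paper's exact EPC definition may differ.

      import math

      def survival(t, shape=0.8, scale=100.0):
          # Weibull survival: probability a worker is still participating after
          # time t (parameters are purely illustrative).
          return math.exp(-((t / scale) ** shape))

      def prob_at_least(m, probs):
          # Probability that at least m of the allocated workers (with per-worker
          # completion probabilities `probs`) return a result, via a small DP.
          dist = [1.0]                     # dist[k] = P(exactly k completions so far)
          for p in probs:
              new = [0.0] * (len(dist) + 1)
              for k, q in enumerate(dist):
                  new[k] += q * (1 - p)
                  new[k + 1] += q * p
              dist = new
          return sum(dist[m:])

      # Allocate replicas one by one until the expected probability of completion
      # (here: at least 3 results returned by time 50) exceeds SPC.
      SPC, needed, deadline = 0.95, 3, 50.0
      workers = []
      while prob_at_least(needed, workers) < SPC:
          workers.append(survival(deadline))
      print(len(workers), "replicas, EPC =", round(prob_at_least(needed, workers), 3))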

  • Performance Evaluation of a 3D-Stencil Library for Distributed Memory Array Accelerators

    Yoshikazu INAGAKI  Shinya TAKAMAEDA-YAMAZAKI  Jun YAO  Yasuhiko NAKASHIMA  

     
    PAPER-Architecture

      Publicized:
    2015/09/15
      Page(s):
    2141-2149

    The Energy-aware Multi-mode Accelerator eXtension [24],[25] (EMAX) is equipped with distributed single-port local memories and ring-formed interconnections. The accelerator is designed to achieve extremely high throughput for scientific computation, big data, and image processing, as well as low power consumption. However, before mapping algorithms onto the accelerator, application developers require sufficient knowledge of the hardware organization and the specially designed instructions, and considerable effort is needed to tune the code for execution efficiency when no well-designed compiler or library is available. To address this problem, we focus on library support for stencil (nearest-neighbor) computations, which represent a class of algorithms commonly used in many partial differential equation (PDE) solvers. In this research, we address the following topics: (1) the system configuration, features, and mnemonics of EMAX; (2) instruction mapping techniques that reduce the amount of data read from the main memory; (3) performance evaluation of the library for PDE solvers. With a library that can reuse local data across outer loop iterations and map many instructions by unrolling the outer loops, the amount of data read from the main memory is reduced to as little as 1/7 of that of a hand-tuned code. In addition, the stencil library reduced the execution time by 23% relative to a general-purpose processor.
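
    For readers unfamiliar with the kernel class being accelerated, a plain 7-point 3-D Jacobi stencil is sketched below in NumPy; it only illustrates the nearest-neighbor access pattern that such a library maps onto local memories, not the EMAX library itself.

      import numpy as np

      def jacobi_7pt(u, n_iters=10):
          # 7-point 3-D Jacobi sweep: each interior point is replaced by the
          # average of its six face neighbours (double-buffered).
          v = u.copy()
          for _ in range(n_iters):
              v[1:-1, 1:-1, 1:-1] = (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                                     u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                                     u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]) / 6.0
              u, v = v, u
          return u

      u = np.zeros((32, 32, 32))
      u[16, 16, 16] = 1.0
      print(jacobi_7pt(u)[16, 16, 16])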

  • Ultrasmall: A Tiny Soft Processor Architecture with Multi-Bit Serial Datapaths for FPGAs

    Shinya TAKAMAEDA-YAMAZAKI  Hiroshi NAKATSUKA  Yuichiro TANAKA  Kenji KISE  

     
    PAPER-Architecture

      Publicized:
    2015/09/15
      Page(s):
    2150-2158

    Soft processors are widely used in FPGA-based embedded computing systems, where efficient resource utilization is as important as high performance. This paper proposes Ultrasmall, a new soft processor architecture for FPGAs. Ultrasmall supports a subset of the MIPS-I instruction set architecture and employs an area-efficient microarchitecture to reduce the use of FPGA resources. While supporting the original 32-bit ISA, Ultrasmall uses a 2-bit serial ALU for all of its operations. This approach significantly reduces resource utilization in exchange for some performance overhead. In addition to these device-independent optimizations, we applied several device-dependent optimizations for Xilinx Spartan-3E FPGAs using 4-input lookup tables (LUTs). Optimizations using specific primitives aggressively reduce the number of occupied slices. Our evaluation results show that Ultrasmall occupies only 84% of the resources of the previous small soft processor. In addition to this resource reduction, Ultrasmall achieves 2.9 times higher performance than the previous approach.

  • Postcopy Live Migration with Guest-Cooperative Page Faults

    Takahiro HIROFUCHI  Isaku YAMAHATA  Satoshi ITOH  

     
    PAPER-Operating System

      Publicized:
    2015/09/15
      Page(s):
    2159-2167

    Postcopy live migration is a promising alternative approach to virtual machine (VM) migration that transfers memory pages after switching the execution host of a VM. It allows a shorter and more deterministic migration time than precopy migration. There is, however, a possibility that postcopy migration degrades VM performance just after the execution host is switched. In this paper, we propose a performance improvement technique for postcopy migration that extends the para-virtualized page fault mechanism of a virtual machine monitor. When the guest operating system accesses a not-yet-transferred memory page, the proposed mechanism allows the guest kernel to defer the execution of the current process until the page data is transferred; in parallel with the page transfer, the guest kernel can yield the VCPU to other active processes. We implemented the proposed technique in our postcopy migration mechanism for QEMU/KVM. Through experiments, we confirmed that our technique successfully alleviates the performance degradation of postcopy migration for web server and database benchmarks.

  • A Flexible Direct Attached Storage for a Data Intensive Application

    Takatsugu ONO  Yotaro KONISHI  Teruo TANIMOTO  Noboru IWAMATSU  Takashi MIYOSHI  Jun TANAKA  

     
    PAPER-Storage System

      Publicized:
    2015/09/15
      Page(s):
    2168-2177

    Big data analysis and data storing applications require a huge volume of storage and high I/O performance. Applications can achieve a high level of performance and cost efficiency by exploiting the high I/O performance of direct attached storage (DAS) such as internal HDDs. However, as the size of stored data keeps increasing, it becomes difficult to replace servers because the internal HDDs contain huge amounts of data. Generally, the data is copied via Ethernet when transferring it from the internal HDDs to a new server, but since the amount of data will continue to increase rapidly, such transfers will take too long. A storage area network such as iSCSI can avoid this problem because the data can be shared among servers; however, this degrades performance and increases cost. The DAS architecture therefore needs to become more flexible without incurring I/O performance degradation. In response to this issue, we propose FlexDAS, which improves the flexibility of direct attached storage by using a disk area network (DAN) without degrading I/O performance. A resource manager connects or disconnects computation nodes to HDDs via the FlexDAS switch, which supports the SAS and SATA protocols, enabling servers to be replaced in a short period of time. We developed a prototype FlexDAS switch and evaluated the architecture quantitatively. Results show that the FlexDAS switch can disconnect and connect an HDD to a server in just 1.16 seconds. We also confirmed that FlexDAS improves the performance of data intensive applications by up to 2.84 times compared with iSCSI.

  • A Light-Weight Rollback Mechanism for Testing Kernel Variants in Auto-Tuning

    Shoichi HIRASAWA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  

     
    PAPER-Software

      Publicized:
    2015/09/15
      Page(s):
    2178-2186

    Automatic performance tuning of a practical application can be time-consuming and sometimes infeasible, because it often needs to evaluate the performance of a large number of code variants to find the best one. In this paper, therefore, a light-weight rollback mechanism is proposed to evaluate each code variant at a low cost. In the proposed mechanism, once one code variant of a target code block is executed, the execution state is rolled back to the state before the block was executed, so that only the block is executed repeatedly to find the best code variant. The mechanism can also terminate a code variant whose execution time exceeds the shortest execution time observed so far. As a result, it avoids executing the whole application many times and thus reduces the timing overhead of the auto-tuning process required for finding the best code variant.

  • Fast Control Method of Software-Managed TLB for Reducing Zero-Copy Communication Overhead

    Toshihiro YAMAUCHI  Masahiro TSURUYA  Hideo TANIGUCHI  

     
    LETTER-Operating System

      Publicized:
    2015/09/15
      Page(s):
    2187-2191

    Microkernel operating systems (OSes) use zero-copy communication to reduce the overhead of copying transfer data, because communication between OS servers occurs frequently in microkernel OSes. However, when a memory management unit manages the translation lookaside buffer (TLB) in software, TLB misses tend to increase the overhead of interprocess communication (IPC) between OS servers running on a microkernel OS. Improving the control method of a software-managed TLB is therefore important for microkernel OSes. This paper proposes a fast control method for a software-managed TLB that manages page attachment in the area used for IPC with TLB entries instead of page tables. Consequently, TLB misses in that area can be avoided, and IPC performance improves. Taking the SH-4 processor as an example of a processor with a software-managed TLB, this paper describes the design and implementation of the proposed method for the AnT operating system, and reports the evaluation results of the proposed method.

  • Parallel Geospatial Raster Data I/O Using File View

    Wei XIONG  Ye WU  Luo CHEN  Ning JING  

     
    LETTER-Storage System

      Publicized:
    2015/09/15
      Page(s):
    2192-2195

    The challenges of providing a divide-and-conquer strategy for tackling large geospatial raster data input/output (I/O) are longstanding, and solutions need to change with advances in technology and hardware. After analyzing the reasons for the problems of the traditional parallel raster I/O mode, a parallel I/O strategy using file views is proposed to solve these problems, and Message Passing Interface I/O (MPI-IO) is used to implement it. Experimental results show how a file view approach can be effectively married to the General Parallel File System (GPFS): a suitable file view setting provides an efficient solution to parallel geospatial raster data I/O.

  • Repeatable Hybrid Parallel Implementation of an Inverse Matrix Computation Using the SMW Formula for a Time-Series Simulation

    Yuta MATSUI  Shinji FUKUMA  Shin-ichiro MORI  

     
    LETTER-Software

      Publicized:
    2015/09/15
      Page(s):
    2196-2198

    In this paper, a repeatable hybrid parallel implementation of inverse matrix computation using the SMW formula is proposed. The authors had previously proposed a hybrid parallel algorithm for inverse matrix computation; it is reasonably fast for a one-time computation of an inverse matrix, but it is hard to apply repeatedly for consecutive computations, since a relocation of the large matrix is required at the beginning of each iteration. In order to eliminate the relocation of the large input matrix, which is the output of the inverse matrix computation from the previous time step, the computation algorithm has been redesigned so that the required portion of the input matrix coincides with the output portion of the previously computed matrix on each node. This makes it possible to apply the SMW formula repeatedly and efficiently to compute inverse matrices in a time-series simulation.
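
    For reference, the Sherman-Morrison-Woodbury (SMW) identity underlying the method is (A + UCV)^(-1) = A^(-1) - A^(-1) U (C^(-1) + V A^(-1) U)^(-1) V A^(-1). The short NumPy check below only verifies the identity numerically on random matrices and says nothing about the authors' hybrid parallel implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 6, 2
      A = np.diag(rng.uniform(1.0, 2.0, n))      # easy-to-invert base matrix
      U = rng.standard_normal((n, k))
      C = np.eye(k)
      V = rng.standard_normal((k, n))

      A_inv = np.diag(1.0 / np.diag(A))
      # (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
      smw = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U) @ V @ A_inv
      direct = np.linalg.inv(A + U @ C @ V)
      print(np.allclose(smw, direct))            # True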

  • Regular Section
  • A Note on Harmonious Coloring of Caterpillars

    Asahi TAKAOKA  Shingo OKUMA  Satoshi TAYU  Shuichi UENO  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2015/08/28
      Page(s):
    2199-2206

    The harmonious coloring of an undirected simple graph is a vertex coloring such that adjacent vertices are assigned different colors and each pair of colors appears together on at most one edge. The harmonious chromatic number of a graph is the least number of colors used in such a coloring. The harmonious chromatic number of a path is known, whereas the problem of finding the harmonious chromatic number is NP-hard even for trees with pathwidth at most 2. Hence, we consider the harmonious coloring of trees with pathwidth 1, also known as caterpillars. This paper shows the harmonious chromatic number of a caterpillar with at most one vertex of degree more than 2. We also show an upper bound on the harmonious chromatic number of a 3-regular caterpillar.
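
    To make the definition concrete, the small checker below (an illustration, not part of the paper) verifies that a coloring is proper and that no pair of colors is repeated on two edges; the caterpillar and coloring used are made up.

      def is_harmonious(edges, color):
          # A proper vertex coloring is harmonious if every (unordered) pair of
          # colors appears on at most one edge.
          seen = set()
          for u, v in edges:
              if color[u] == color[v]:
                  return False                  # not even a proper coloring
              pair = frozenset((color[u], color[v]))
              if pair in seen:
                  return False                  # color pair reused on two edges
              seen.add(pair)
          return True

      # A caterpillar: path 0-1-2-3 with leaves 4 and 5 attached to vertex 1.
      edges = [(0, 1), (1, 2), (2, 3), (1, 4), (1, 5)]
      coloring = {0: 2, 1: 1, 2: 3, 3: 2, 4: 4, 5: 5}
      print(is_harmonious(edges, coloring))     # True, using 5 colors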

  • Design and Evaluation of a Configurable Query Processing Hardware for Data Streams

    Yasin OGE  Masato YOSHIMI  Takefumi MIYOSHI  Hideyuki KAWASHIMA  Hidetsugu IRIE  Tsutomu YOSHINAGA  

     
    PAPER-Computer System

      Publicized:
    2015/09/14
      Page(s):
    2207-2217

    In this paper, we propose Configurable Query Processing Hardware (CQPH), an FPGA-based accelerator for continuous query processing over data streams. CQPH is a highly optimized and minimal-overhead execution engine designed to deliver real-time response for high-volume data streams. Unlike most of the other FPGA-based approaches, CQPH provides on-the-fly configurability for multiple queries with its own dynamic configuration mechanism. With a dedicated query compiler, SQL-like queries can be easily configured into CQPH at run time. CQPH supports continuous queries including selection, group-by operation and sliding-window aggregation with a large number of overlapping sliding windows. As a proof of concept, a prototype of CQPH is implemented on an FPGA platform for a case study. Evaluation results indicate that a given query can be configured within just a few microseconds, and the prototype implementation of CQPH can process over 150 million tuples per second with a latency of less than a microsecond. Results also indicate that CQPH provides linear scalability to increase its flexibility (i.e., on-the-fly configurability) without sacrificing performance (i.e., maximum allowable clock speed).

  • Lines of Comments as a Noteworthy Metric for Analyzing Fault-Proneness in Methods

    Hirohisa AMAN  Sousuke AMASAKI  Takashi SASAKI  Minoru KAWAHARA  

     
    PAPER-Software Engineering

      Publicized:
    2015/09/04
      Page(s):
    2218-2228

    This paper focuses on the power of comments to predict fault-prone programs. In general, comments along with executable statements enhance the understandability of programs. However, comments may also be used to mask a lack of readability in the program; therefore, well-written comments are referred to as “deodorant to mask code smells” in the field of code refactoring. This paper conducts an empirical analysis to examine whether Lines of Comments (LCM) written inside a method's body is a noteworthy metric for analyzing fault-proneness in Java methods. The empirical results show the following two findings: (1) more-commented methods (methods having more comments than the amount estimated from the size and complexity of the method) are about 1.6 - 2.8 times more likely to be faulty than the others, and (2) LCM can be a useful factor in fault-prone method prediction models along with the method size and the method complexity.
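
    A sketch of how "more-commented" methods could be flagged, assuming a simple least-squares estimate of expected comments from method size and complexity (the paper's actual estimation model and thresholds may differ):

      import numpy as np

      def flag_more_commented(loc, complexity, lcm):
          # Flag methods whose lines of comments (LCM) exceed the amount a
          # least-squares fit on LOC and cyclomatic complexity would predict.
          X = np.column_stack([np.ones_like(loc), loc, complexity])
          coef, *_ = np.linalg.lstsq(X, lcm, rcond=None)
          return lcm > X @ coef

      loc = np.array([10, 40, 80, 25, 60], dtype=float)
      cc = np.array([2, 6, 12, 3, 9], dtype=float)
      lcm = np.array([1, 3, 30, 2, 5], dtype=float)
      print(flag_more_commented(loc, cc, lcm))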

  • Development and Evaluation of Near Real-Time Automated System for Measuring Consumption of Seasonings

    Kazuaki NAKAMURA  Takuya FUNATOMI  Atsushi HASHIMOTO  Mayumi UEDA  Michihiko MINOH  

     
    PAPER-Human-computer Interaction

      Publicized:
    2015/09/07
      Page(s):
    2229-2241

    The amount of seasoning used during food preparation is important information that helps people cook delicious dishes as well as take care of their health. In this paper, we propose a near real-time automated system for measuring and recording the amount of seasonings used during food preparation. The proposed system is equipped with two devices: electronic scales and a camera. Seasoning bottles are normally placed on the electronic scales, which continually measure the total weight of the bottles placed on them. When a chef uses a certain seasoning, he/she first picks up the bottle containing it from the scales, adds the seasoning to a dish, and then returns the bottle to the scales. During this process, the chef's picking and returning actions are monitored by the camera. The consumed amount of each seasoning is calculated as the difference in weight before and after it is used. We evaluated the performance of the proposed system in 301 trials of actual food preparation performed by seven participants. The results revealed that our system successfully measured the consumption of seasonings in 60.1% of all the trials.
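
    The measurement logic reduces to a weight difference keyed by camera-detected pick and return events; a hypothetical sketch follows (the event format and field names are assumptions, not the paper's implementation).

      def log_consumption(events, log):
          # events: time-ordered (action, bottle_id, total_weight) tuples, where
          # 'pick'/'return' come from the camera and total_weight is the scales'
          # reading just before a pick and just after the matching return.
          pending = {}
          for action, bottle, weight in events:
              if action == "pick":
                  pending[bottle] = weight
              elif action == "return" and bottle in pending:
                  log[bottle] = log.get(bottle, 0.0) + (pending.pop(bottle) - weight)
          return log

      usage = log_consumption([("pick", "soy_sauce", 1250.0),
                               ("return", "soy_sauce", 1238.5)], {})
      print(usage)   # {'soy_sauce': 11.5}  -> 11.5 g consumed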

  • A Novel Earthquake Education System Based on Virtual Reality

    Xiaoli GONG  Yanjun LIU  Yang JIAO  Baoji WANG  Jianchao ZHOU  Haiyang YU  

     
    PAPER-Human-computer Interaction

      Publicized:
    2015/09/16
      Page(s):
    2242-2249

    An earthquake is a destructive natural disaster that cannot be predicted accurately and causes devastating damage and losses. In fact, much of the damage can be prevented if people know what to do during and after an earthquake. Earthquake education is the most important way to raise public awareness and mitigate the damage caused by earthquakes. Generally, earthquake education consists of conducting traditional earthquake drills in schools or communities and experiencing an earthquake through the use of an earthquake simulator. However, these approaches are unrealistic or expensive to apply, especially in underdeveloped areas where earthquakes occur frequently. In this paper, an earthquake drill simulation system based on virtual reality (VR) technology is proposed. A user is immersed in a 3D virtual earthquake environment through a head mounted display and is able to control an avatar in the virtual scene via Kinect to respond to the simulated earthquake environment generated by SIGVerse, a simulation platform. It is a cost-effective solution and is easy to deploy. The design and implementation of this VR system are presented, and a dormitory earthquake simulation is conducted. Results show that powerful earthquakes can be simulated successfully and that VR technology can be applied to earthquake drills.

  • Multi-Feature Guided Brain Tumor Segmentation Based on Magnetic Resonance Images

    Ye AI  Feng MIAO  Qingmao HU  Weifeng LI  

     
    PAPER-Pattern Recognition

      Publicized:
    2015/08/25
      Page(s):
    2250-2256

    In this paper, a novel method for high-grade brain tumor segmentation from multi-sequence magnetic resonance images is presented. First, a Gaussian mixture model (GMM) is introduced to derive an initial posterior probability by fitting the fluid attenuation inversion recovery histogram. Second, grayscale and region properties are extracted from different sequences. Third, grayscale and region characteristics with different weights are used to adjust the posterior probability. Finally, a cost function based on the posterior probability and neighborhood information is formulated and optimized via graph cut. Experimental results on a public dataset with 20 high-grade brain tumor patient images show that the proposed method achieves a Dice coefficient of 78%, which is higher than the standard graph cut algorithm without a probability-adjusting step and some other cost-function-based methods.

  • Multiple-Shot People Re-Identification by Patch-Wise Learning

    Guanwen ZHANG  Jien KATO  Yu WANG  Kenji MASE  

     
    PAPER-Pattern Recognition

      Publicized:
    2015/08/31
      Page(s):
    2257-2270

    In this paper, we propose a patch-wise learning based approach to the multiple-shot people re-identification task. In the proposed approach, re-identification is formulated as a patch-wise set-to-set matching problem, with each patch set being matched using a specifically learned Mahalanobis distance metric. The proposed approach has two advantages: (1) a patch-wise representation that reduces the ambiguity of a non-rigid matching problem (of the human body) to an approximately rigid one (of body parts); (2) a patch-wise learning algorithm that enables more constraints to be included in the learning process and results in distance metrics of high quality. We evaluate the proposed approach on popular benchmark datasets and confirm its competitive performance compared to state-of-the-art methods.
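
    A toy sketch of patch-wise set-to-set matching with per-patch Mahalanobis metrics; the min-distance aggregation and the identity metrics used below are simplifying assumptions for illustration, not the authors' learning algorithm.

      import numpy as np

      def mahalanobis(x, y, M):
          d = x - y
          return float(d @ M @ d)

      def set_to_set(patch_sets_a, patch_sets_b, metrics):
          # For each patch location p, compare the two sets of patch features
          # with that location's metric M_p, take the smallest pairwise
          # distance, and sum over locations to get the matching score.
          score = 0.0
          for feats_a, feats_b, M in zip(patch_sets_a, patch_sets_b, metrics):
              score += min(mahalanobis(a, b, M) for a in feats_a for b in feats_b)
          return score

      # 3 patch locations, 2 shots per person, 4-D features, identity metrics.
      rng = np.random.default_rng(1)
      person_a = [rng.standard_normal((2, 4)) for _ in range(3)]
      person_b = [rng.standard_normal((2, 4)) for _ in range(3)]
      metrics = [np.eye(4)] * 3
      print(set_to_set(person_a, person_b, metrics))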

  • Speech Recognition of English by Japanese Using Lexicon Represented by Multiple Reduced Phoneme Sets

    Xiaoyun WANG  Seiichi YAMAMOTO  

     
    PAPER-Speech and Hearing

      Publicized:
    2015/09/10
      Page(s):
    2271-2279

    Recognition of second language (L2) speech is still a challenging task even for state-of-the-art automatic speech recognition (ASR) systems, partly because pronunciation by L2 speakers is usually significantly influenced by the mother tongue of the speakers. The authors previously proposed using a reduced phoneme set (RPS) instead of the canonical one of L2 when the mother tongue of speakers is known, and demonstrated that this reduced phoneme set improved the recognition performance through experiments using English utterances spoken by Japanese. However, the proficiency of L2 speakers varies widely, as does the influence of the mother tongue on their pronunciation. As a result, the effect of the reduced phoneme set is different depending on the speakers' proficiency in L2. In this paper, the authors examine the relation between proficiency of speakers and a reduced phoneme set customized for them. The experimental results are then used as the basis of a novel speech recognition method using a lexicon in which the pronunciation of each lexical item is represented by multiple reduced phoneme sets, and the implementation of a language model most suitable for that lexicon is described. Experimental results demonstrate the high validity of the proposed method.

  • F0 Parameterization of Glottalized Tones in HMM-Based Speech Synthesis for Hanoi Vietnamese

    Duy Khanh NINH  Yoichi YAMASHITA  

     
    PAPER-Speech and Hearing

      Publicized:
    2015/09/07
      Page(s):
    2280-2289

    A conventional HMM-based speech synthesis system for Hanoi Vietnamese often suffers from hoarse quality due to incomplete F0 parameterization of glottalized tones. Since estimating F0 from glottalized waveform is rather problematic for usual F0 extractors, we propose a pitch marking algorithm where pitch marks are propagated from regular regions of a speech signal to glottalized ones, from which complete F0 contours for the glottalized tones are derived. The proposed F0 parameterization scheme was confirmed to significantly reduce the hoarseness whilst slightly improving the tone naturalness of synthetic speech by both objective and listening tests. The pitch marking algorithm works as a refinement step based on the results of an F0 extractor. Therefore, the proposed scheme can be combined with any F0 extractor.

  • Moiré Reduction Using Inflection Point and Color Variation in Digital Camera of No Optical Low Pass Filter

    Dae-Chul KIM  Wang-Jun KYUNG  Ho-Gun HA  Yeong-Ho HA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2015/09/10
      Page(s):
    2290-2298

    The role of an optical low-pass filter (OLPF) in a digital still camera is to remove the high spatial frequencies that cause aliasing, thereby enhancing the image quality. However, this also causes some loss of detail. Yet, when an image is captured without the OLPF, moiré generally appears in the high spatial frequency region of the image. Accordingly, this paper presents a moiré reduction method that allows omission of the OLPF. Since most digital still cameras use a CCD or a CMOS with a Bayer pattern, moiré patterns and color artifacts are simultaneously induced by aliasing at high spatial frequencies. Therefore, in this study, moiré reduction is performed in both the luminance channel to remove the moiré patterns and the color channel to reduce color smearing. To detect the moiré patterns, the spatial frequency response (SFR) of the camera is first analyzed. The moiré regions are identified using patterns related to the SFR of the camera and then analyzed in the frequency domain. The moiré patterns are reduced by removing their frequency components, represented by the inflection point between the high-frequency and DC components in the moiré region. To reduce the color smearing, color changing regions are detected using the color variation ratios for the RGB channels and then corrected by multiplying with the average surrounding colors. Experiments confirm that the proposed method is able to reduce the moiré in both the luminance and color channels, while also preserving the detail.

  • Utilizing Attributed Graph Representation in Object Detection and Tracking for Indoor Range Sensor Surveillance Cameras

    Houari SABIRIN  Hiroshi SANKOH  Sei NAITO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2015/09/10
      Page(s):
    2299-2307

    Identifying moving objects in a video recording produced by a range sensor camera is difficult because of the limited information available for classifying different objects. On the other hand, the infrared signal from a range sensor camera is more robust to extreme luminance, i.e., when the monitored area is too bright or too dark. This paper proposes a method for detecting and tracking moving objects in image sequences captured by stationary range sensor cameras, in which the depth information is utilized to correctly identify each detected object. First, camera calibration and background subtraction are performed to separate the background from the moving objects. Next, a 2D projection mapping is performed to obtain the location and contour of the objects in the 2D plane. Based on this information, graph matching is performed using features extracted from the 2D data, namely object position, size, and behavior. By observing changes in the number of objects and in the objects' positions relative to each other, similarity matching is performed to track the objects in the temporal domain. Experimental results show that by using similarity matching, objects can be correctly identified even during occlusion.

  • Top-Down Visual Attention Estimation Using Spatially Localized Activation Based on Linear Separability of Visual Features

    Takatsugu HIRAYAMA  Toshiya OHIRA  Kenji MASE  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2015/09/10
      Page(s):
    2308-2316

    Intelligent information systems captivate people's attention; examples include driving support vehicles capable of sensing driver state and communication robots capable of interacting with humans. Modeling how people search for visual information is indispensable for designing these kinds of systems. In this paper, we focus on human visual attention, which is closely related to visual search behavior, and propose a computational model to estimate human visual attention during a visual target search task. Existing models estimate visual attention using the ratio between a representative value of a visual feature of the target stimulus and that of the distractors or background. These models, however, often cannot achieve good performance for difficult search tasks that require a sequential spotlighting process. For such tasks, the linear separability effect of a visual feature distribution should be considered. Hence, we introduce this effect into spatially localized activation. Concretely, our top-down model estimates target-specific visual attention using Fisher's variance ratio between the visual feature distribution of a local region in the field of view and that of the target stimulus. We confirm the effectiveness of our computational model through a visual search experiment.
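
    A one-dimensional sketch of a Fisher's variance ratio computed between a local region's feature samples and the target's feature samples; the feature, sample sizes, and distributions below are made up, and how the ratio is turned into an attention value is left to the paper.

      import numpy as np

      def fisher_ratio(region_vals, target_vals):
          # 1-D Fisher criterion: squared difference of the two sample means
          # over the pooled within-class variance.  A large value means the
          # local region's feature distribution is linearly well separated
          # from the target's.
          between = (region_vals.mean() - target_vals.mean()) ** 2
          within = region_vals.var() + target_vals.var() + 1e-12
          return between / within

      rng = np.random.default_rng(0)
      region = rng.normal(0.2, 1.0, 500)   # e.g. hue samples from one local window
      target = rng.normal(1.5, 0.5, 50)    # hue samples from the target stimulus
      print(round(fisher_ratio(region, target), 3))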

  • Application Prefetcher Design Using both I/O Reordering and I/O Interleaving

    Yongsoo JOO  Sangsoo PARK  Hyokyung BAHN  

     
    LETTER-Computer System

      Publicized:
    2015/08/20
      Page(s):
    2317-2321

    Application prefetchers improve application launch performance on HDDs through either I/O reordering or I/O interleaving, but there has been no proposal to combine the two techniques. We present a new algorithm to combine both approaches, and demonstrate that it reduces cold start launch time by 50%.

  • An Empirical Study of Bugs in Industrial Financial Systems

    Xiao XUAN  Xiaoqiong ZHAO  Ye WANG  Shanping LI  

     
    LETTER-Software Engineering

      Publicized:
    2015/09/15
      Page(s):
    2322-2327

    Bugs in industrial financial systems have not been extensively studied. To address this gap, we focused on the empirical study of bugs in three systems, PMS, β-Analyzer, and OrderPro. Results showed the 3 most common types of bugs in industrial financial systems to be internal interface (19.00%), algorithm/method (17.67%), and logic (15.00%).

  • SimCS: An Effective Method to Compute Similarity of Scientific Papers Based on Contribution Scores

    Masoud REYHANI HAMEDANI  Sang-Wook KIM  

     
    LETTER-Data Engineering, Web Information Systems

      Publicized:
    2015/09/14
      Page(s):
    2328-2332

    In this paper, we propose SimCS (similarity based on contribution scores) to compute the similarity of scientific papers. For similarity computation, we exploit the notion of a contribution score, which indicates how much a paper contributes to another paper citing it. We also consider the author dominance of papers in computing contribution scores. We perform extensive experiments with a real-world dataset to show the superiority of SimCS. In comparison with SimCC, the state-of-the-art method, SimCS not only requires no extra parameter tuning but also shows higher accuracy in similarity computation.

  • Enhancing IEEE 802.15.4-Based Wireless Networks to Handle Loss of Beacon Frames

    Jeongyeup PAEK  Byung-Seo KIM  

     
    LETTER-Information Network

      Publicized:
    2015/08/31
      Page(s):
    2333-2336

    Even though the IEEE 802.15.4 standard defines processes for handling the loss of beacon frames in beacon-enabled low-rate wireless personal area networks (LR-WPANs), they are neither efficient nor detailed. This letter proposes an enhanced process to improve the throughput performance of LR-WPANs under the loss of beacon frames. The key idea of our proposed enhancement is to let devices that have not received a beacon frame due to packet loss transmit their data in the contention period, and even in the inactive period, instead of holding pending frames during the whole superframe period. The proposed protocol is evaluated using mathematical analysis as well as simulations, and the throughput improvement of LR-WPANs is proved.

  • Common and Adapted Vocabularies for Face Verification

    Shuoyan LIU  Kai FANG  

     
    LETTER-Pattern Recognition

      Publicized:
    2015/09/18
      Page(s):
    2337-2340

    Face verification in the presence of age progression is an important problem that has not been widely addressed. Despite appearance changes due to aging, facial images of the same person remain more similar to each other than facial images from different individuals. Hence, we design common and adapted vocabularies, where the common vocabulary describes the contents of the general population and the adapted vocabulary represents the specific characteristics of one image of a facial pair. The other image is characterized by a concatenated histogram of common and adapted visual word counts, termed the “age-invariant distinctive representation”. The representation describes whether the image content is best modeled by the common vocabulary or the corresponding adapted vocabulary, and it is then used to accomplish face verification. The proposed approach is tested on the FG-NET dataset and a collection of real-world facial images from identification cards. The experimental results demonstrate the effectiveness of the proposed method for verifying identity at a modest computational cost.

  • Performance Enhancement of Cross-Talk Canceller for Four-Speaker System by Selective Speaker Operation

    Su-Jin CHOI  Jeong-Yong BOO  Ki-Jun KIM  Hochong PARK  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/08/25
      Page(s):
    2341-2344

    We propose a method of enhancing the performance of a cross-talk canceller for a four-speaker system with respect to sweet spot size and ringing effect. For the large sweet spot of a cross-talk canceller, the speaker layout needs to be symmetrical to the listener's position. In addition, a ringing effect of the cross-talk canceller is reduced when many speakers are located close to each other. Based on these properties, the proposed method first selects the two speakers in a four-speaker system that are most symmetrical to the target listener's position and then adds the remaining speakers between these two to the final selection. By operating only these selected speakers, the proposed method enlarges the sweet spot size and reduces the ringing effect. We conducted objective and subjective evaluations and verified that the proposed method improves the performance of the cross-talk canceller compared to the conventional method.

  • Supervised Denoising Pre-Training for Robust ASR with DNN-HMM

    Shin Jae KANG  Kang Hyun LEE  Nam Soo KIM  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/09/07
      Page(s):
    2345-2348

    In this letter, we propose a novel supervised pre-training technique for deep neural network (DNN)-hidden Markov model systems to achieve robust speech recognition in adverse environments. In the proposed approach, our aim is to initialize the DNN parameters such that they yield abstract features robust to acoustic environment variations. In order to achieve this, we first derive the abstract features from an early fine-tuned DNN model which is trained based on a clean speech database. By using the derived abstract features as the target values, the standard error back-propagation algorithm with the stochastic gradient descent method is performed to estimate the initial parameters of the DNN. The performance of the proposed algorithm was evaluated on Aurora-4 DB, and better results were observed compared to a number of conventional pre-training methods.

  • Using Correlated Regression Models to Calculate Cumulative Attributes for Age Estimation

    Lili PAN  Qiangsen HE  Yali ZHENG  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2015/08/28
      Page(s):
    2349-2352

    Facial age estimation requires accurately capturing the mapping relationship between facial features and the corresponding ages, so as to precisely estimate the ages of new input facial images. Previous works usually use a one-layer regression model to learn this complex mapping relationship, resulting in low estimation accuracy. In this letter, we propose a new gender-specific regression model with a two-layer structure for more accurate age estimation. Unlike recent two-layer models that use a global regressor to calculate cumulative attributes (CA) and use CA to estimate age, we use gender-specific regressors to calculate CA with more flexibility and precision. Extensive experimental results on the FG-NET and Morph 2 datasets demonstrate the superiority of our method over other state-of-the-art age estimation methods.

  • Dynamic Rendering Quality Scaling Based on Resolution Changes

    MinKyu KIM  SunHo KI  YoungDuke SEO  JinHong PARK  ChuShik JHON  

     
    LETTER-Computer Graphics

      Publicized:
    2015/09/17
      Page(s):
    2353-2357

    Recently, the mobile graphics industry has come to require ultra-realistic visual quality at 60 fps within a limited GPU power budget. For graphics-heavy applications that run at 30 fps, we easily observed very noticeable flickering artifacts, and the workload imposed by high resolutions at high frame rates directly shortens battery life. Unlike recent frame rate up-sampling algorithms, which remedy the flickering but incur significant overheads to reconstruct intermediate frames, we propose dynamic rendering quality scaling (DRQS), which combines dynamic rendering based on resolution changes with quality scaling to increase the frame rate with negligible overhead using a transform matrix. Furthermore, DRQS reduces the workload by up to 32% without perceptible visual changes for graphics-light applications.