
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E101-D No.6 (Publication Date: 2018/06/01)

    Special Section on Formal Approaches
  • FOREWORD Open Access

    Tatsuhiro TSUCHIYA  

     
    FOREWORD

      Page(s):
    1466-1466
  • Direct Update of XML Documents with Data Values Compressed by Tree Grammars

    Kenji HASHIMOTO  Ryunosuke TAKAYAMA  Hiroyuki SEKI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Page(s):
    1467-1478

    One of the most promising compression methods for XML documents is the one that translates a given document to a tree grammar that generates it. A feature of this compression is that the internal structure is kept in the production rules of the grammar. This enables us to directly manipulate the tree structure without decompression. However, previous studies assume that a given XML document does not contain data values, because they focus on direct retrieval and manipulation of the tree structure. This paper proposes a direct update method for XML documents with data values and shows the effectiveness of the proposed method based on experiments conducted with our implemented tool.

  • Counting Algorithms for Recognizable and Algebraic Series

    Bao Trung CHU  Kenji HASHIMOTO  Hiroyuki SEKI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Page(s):
    1479-1490

    Formal series are a natural extension of formal languages obtained by associating each word with a value called a coefficient or weight. Among them, recognizable series and algebraic series can be regarded as extensions of regular languages and context-free languages, respectively. The coefficient of a word w can represent quantities such as the cost of an operation on w or the probability that w is emitted. One possible application of formal series is string counting in quantitative analysis of software. In this paper, we define counting problems for formal series and propose algorithms for them. The membership problem for an automaton or a grammar corresponds to the problem of computing the coefficient of a given word in a given series. Accordingly, we define the counting problem for formal series in the following two ways. For a formal series S and a natural number d, we define CC(S,d) to be the sum of the coefficients of all words of length d in S, and SC(S,d) to be the number of words of length d that have non-zero coefficients in S. We show that for a given recognizable series S and a natural number d, CC(S,d) can be computed in O(η log d) time, where η is an upper bound on the time needed for a single state-transition matrix operation, and that if the state-transition matrices of S commute under multiplication, SC(S,d) can be computed in time polynomial in d. We extend these notions to tree series and discuss how to compute them efficiently. We also propose an algorithm that computes CC(S,d) in time quadratic in d for an algebraic series S. We report the CPU time of the proposed algorithm for computing CC(S,d) for several context-free grammars as S, one of which represents the syntax of the C language. To examine the applicability of the proposed algorithms to string counting for vulnerability analysis, we also present results on string counting for the Kaluza benchmark.
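
    For a recognizable series given by a linear representation (initial vector, one state-transition matrix per letter, final vector), CC(S,d) collapses to a single matrix power: summing the coefficients of all length-d words factorises into (sum of the letter matrices) raised to the d-th power, which repeated squaring evaluates with O(log d) matrix products. The sketch below (plain NumPy, illustrative names; not the authors' implementation) shows this idea.

    import numpy as np

    def cc_recognizable(init, mats, final, d):
        """Sum of the coefficients of all length-d words of a recognizable series.

        The series is assumed to be given by a linear representation
        (init, {M_a}, final), so coeff(a1...ad) = init @ M_a1 @ ... @ M_ad @ final.
        Summing over all words of length d factorises into (sum_a M_a)**d,
        computed with O(log d) matrix products by repeated squaring.
        """
        A = sum(mats.values())              # sum of the state-transition matrices
        result = np.eye(A.shape[0], dtype=A.dtype)
        p = A
        while d > 0:                        # binary (repeated-squaring) exponentiation
            if d & 1:
                result = result @ p
            p = p @ p
            d >>= 1
        return init @ result @ final

    # Toy example: a series over {a, b} with all coefficients equal to 1,
    # so CC(S, d) should equal 2**d.
    init = np.array([1.0])
    final = np.array([1.0])
    mats = {'a': np.array([[1.0]]), 'b': np.array([[1.0]])}
    print(cc_recognizable(init, mats, final, 3))   # 8.0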

  • Static Dependency Pair Method in Functional Programs

    Keiichirou KUSAKARI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Page(s):
    1491-1502

    We have previously introduced the static dependency pair method, which proves termination by analyzing the static recursive structure of various extensions of term rewriting systems for handling higher-order functions. The key is the formalization of recursive structures based on the notion of strong computability, which was introduced for proving the termination of typed λ-calculi. To bring the static dependency pair method closer to existing functional programs, we also extend the method to term rewriting models in which functional abstractions with patterns are permitted. Since the static dependency pair method is not sound in general, we formulate a class, namely accessibility, in which the method works well. The static dependency pair method is a very natural form of reasoning; therefore, our extension differs only slightly from previous results. On the other hand, the soundness proof is dramatically more difficult.

  • Computational Complexity and Polynomial Time Procedure of Response Property Problem in Workflow Nets

    Muhammad Syafiq BIN AB MALEK  Mohd Anuaruddin BIN AHMADON  Shingo YAMAGUCHI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Page(s):
    1503-1510

    The response property is a kind of liveness property. The response property problem is defined as follows: given two activities α and β, whenever α is executed, is β always executed afterwards? In this paper, we tackle the problem in terms of Workflow Petri nets (WF-nets for short). Our results are that (i) the response property problem for acyclic WF-nets is decidable, (ii) the problem is intractable for acyclic asymmetric choice (AC) WF-nets, and (iii) the problem for acyclic bridge-less well-structured WF-nets is solvable in polynomial time. We illustrate the usefulness of the procedure with an application example.

  • Regular Section
  • Online Linear Optimization with the Log-Determinant Regularizer

    Ken-ichiro MORIDOMI  Kohei HATANO  Eiji TAKIMOTO  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/03/01
      Page(s):
    1511-1520

    We consider online linear optimization over symmetric positive semi-definite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round t the algorithm and the adversary choose matrices Xt and Lt, respectively, and the algorithm then suffers a loss given by the Frobenius inner product of Xt and Lt. The goal of the algorithm is to minimize the cumulative loss. We can employ a standard framework called Follow the Regularized Leader (FTRL) to design algorithms, where we need to choose an appropriate regularization function to obtain a good performance guarantee. We show that log-determinant regularization works better than other popular regularization functions when the loss matrices Lt are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is a new technique for deriving performance bounds that exploits the strong convexity of the log-determinant with respect to the loss matrices, whereas in previous analyses strong convexity is defined with respect to a norm. Intuitively, skipping the norm-based analysis yields the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the analogue of the log-determinant regularizer in the vector case, works well.

  • Evaluation of Register Number Abstraction for Enhanced Instruction Register Files

    Naoki FUJIEDA  Kiyohiro SATO  Ryodai IWAMOTO  Shuichi ICHIKAWA  

     
    PAPER-Computer System

      Publicized:
    2018/03/14
      Page(s):
    1521-1531

    Instruction set randomization (ISR) is a cost-effective obfuscation technique that modifies or enhances the mapping between instructions and machine code. An Instruction Register File (IRF), a list of frequently used instructions, can be used for ISR by providing a means of indirect access to those instructions. This study examines an IRF that integrates a positional register, which was proposed as a supplementary unit of the IRF, for the sake of tamper resistance. According to our evaluation, with a new design for the contents of the positional register, the measure of tamper resistance increased by up to 8.2%, which corresponds to a 32.2% increase in the size of the IRF. The number of logic elements added by the positional register was 3.5% of that of the baseline processor.

  • Optimization of Body Biasing for Variable Pipelined Coarse-Grained Reconfigurable Architectures

    Takuya KOJIMA  Naoki ANDO  Hayate OKUHARA  Ng. Anh Vu DOAN  Hideharu AMANO  

     
    PAPER-Computer System

      Publicized:
    2018/03/09
      Page(s):
    1532-1540

    Variable Pipeline Cool Mega Array (VPCMA) is a low-power Coarse Grained Reconfigurable Architecture (CGRA) based on the concept of CMA (Cool Mega Array). It provides a pipeline structure in the PE array that can be configured to fit the target algorithms and the required performance. Also, VPCMA uses the Silicon On Thin Buried oxide (SOTB) technology, a type of Fully Depleted Silicon On Insulator (FDSOI), so it is possible to control its body bias voltage to balance performance and leakage power. In this paper, we study the optimization of the VPCMA body bias while simultaneously considering its variable pipeline structure. Through evaluations, we observe that it is possible to achieve an average energy reduction, for the studied applications, of 17.75% and 10.49% compared to the zero bias (no body bias control) and the uniform (control of the whole PE array) cases, respectively, while respecting performance constraints. Besides, it is observed that, with appropriate body bias control, it is possible to extend the achievable performance, hence enabling broader trade-off analyses between consumption and performance. By considering the dynamic power as well as the static power, a more appropriate pipeline structure and body bias voltage can be obtained. In addition, when control of VDD is integrated, higher performance can be achieved with only a steady increase in power. These promising results show that applying an adequate optimization technique for body bias control while simultaneously considering pipeline structures not only enables further power reduction compared with previous methods, but also allows more trade-off analysis possibilities.

  • The Pre-Testing for Virtual Robot Development Environment

    Hyun Seung SON  R. Young Chul KIM  

     
    PAPER-Software Engineering

      Publicized:
    2018/03/01
      Page(s):
    1541-1551

    Traditional tests are planned and designed in the early stages, but test cases can be executed only after the source code has been implemented. Because of this time gap between the design stage and the testing stage, by the time a software design error is found it is too late. To solve this problem, this paper suggests a virtual pre-testing process. The virtual pre-testing process can find software and testing errors before the development stage by automatically generating and executing test cases with modeling and simulation (M&S) in a virtual environment. The first part of this method creates test cases from a state transition tree based on a state diagram, covering state, transition, instruction-pair, and all-path coverage. The second part models and simulates a virtual target, which is then pre-tested with the test cases. In other words, the generated test cases are automatically transformed into an event list, and the test cases are executed against the simulated target within a virtual environment. As a result, design and test errors can be found in the early stages of the development cycle, which in turn reduces development time and cost as much as possible.

  • Processing Multiple-User Location-Based Keyword Queries

    Yong WANG  Xiaoran DUAN  Xiaodong YANG  Yiquan ZHANG  Xiaosong ZHANG  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2018/03/01
      Page(s):
    1552-1561

    Geosocial networking allows users to interact with respect to their current locations, which enables a group of users to determine where to meet. This calls for techniques that support the processing of Multiple-user Location-based Keyword (MULK) queries, which return a set of Points of Interest (POIs) that are 'close' to the locations of the users in a group and can provide them with potential options at the lowest expense (e.g., minimizing travel distance). In this paper, we formalize the MULK query and propose a dynamic programming-based algorithm to find the optimal result set. Further, we design two approximation algorithms to improve MULK query processing efficiency. The experimental evaluations show that our solutions are feasible and efficient under various parameter settings.

  • Data Augmented Dynamic Time Warping for Skeletal Action Classification

    Ju Yong CHANG  Yong Seok HEO  

     
    PAPER-Pattern Recognition

      Publicized:
    2018/03/01
      Page(s):
    1562-1571

    We present a new action classification method for skeletal sequence data. The proposed method is based on simple nonparametric feature matching without a learning process. We first augment the training dataset to implicitly construct an exponentially increasing number of training sequences, which can be used to improve the generalization power of the proposed action classifier. These augmented training sequences are matched to the test sequence with a relaxed dynamic time warping (DTW) technique. Our relaxed formulation allows the proposed method to work faster and more efficiently than the conventional DTW-based method using a non-augmented dataset. Experimental results show that the proposed approach produces effective action classification results on real datasets of various scales.
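
    As a point of reference, a minimal NumPy sketch of plain dynamic time warping with 1-nearest-neighbour classification is given below; the paper's data augmentation and relaxed DTW formulation are not reproduced, and all names are illustrative.

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        """Plain dynamic time warping distance between two sequences of
        frame-level feature vectors (shape: [T, D]), with Euclidean local cost."""
        Ta, Tb = len(seq_a), len(seq_b)
        D = np.full((Ta + 1, Tb + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, Ta + 1):
            for j in range(1, Tb + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[Ta, Tb]

    def classify(test_seq, train_seqs, train_labels):
        """1-nearest-neighbour action classification under the DTW distance."""
        dists = [dtw_distance(test_seq, s) for s in train_seqs]
        return train_labels[int(np.argmin(dists))]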

  • Pain Intensity Estimation Using Deep Spatiotemporal and Handcrafted Features

    Jinwei WANG  Huazhi SUN  

     
    PAPER-Pattern Recognition

      Publicized:
    2018/03/12
      Page(s):
    1572-1580

    Automatically recognizing pain and estimating pain intensity is an emerging research area with promising applications in the medical and healthcare field; it plays a crucial role in the diagnosis and treatment of patients who have limited ability to communicate verbally, and it remains a challenge in pattern recognition. Recently, deep learning has achieved impressive results in many domains. However, deep architectures require a significant amount of labeled data for training, and they may fail to outperform conventional handcrafted features when data are insufficient, which is also the problem faced by pain detection. Furthermore, the latest studies show that handcrafted features may provide complementary information to deep-learned features; hence, combining these features may result in improved performance. Motivated by the above considerations, in this paper we propose an innovative method based on the combination of deep spatiotemporal and handcrafted features for pain intensity estimation. We use C3D, a deep 3-dimensional convolutional network that takes a continuous sequence of video frames as input, to extract spatiotemporal facial features; C3D models the appearance and motion of videos simultaneously. For handcrafted features, we extract geometric information by computing the distance between the normalized facial landmarks of each frame and those of the mean face shape, and we extract appearance information using histogram of oriented gradients (HOG) features around the normalized facial landmarks of each frame. Two levels of SVRs are trained using the spatiotemporal, geometric, and appearance features to obtain the estimation results. We tested the proposed method on the UNBC-McMaster shoulder pain expression archive database and obtained experimental results that outperform the current state-of-the-art.

  • Domain Adaptation Based on Mixture of Latent Words Language Models for Automatic Speech Recognition Open Access

    Ryo MASUMURA  Taichi ASAMI  Takanobu OBA  Hirokazu MASATAKI  Sumitaka SAKAUCHI  Akinori ITO  

     
    PAPER-Speech and Hearing

      Publicized:
    2018/02/26
      Page(s):
    1581-1590

    This paper proposes a novel domain adaptation method that can utilize out-of-domain text resources and partially domain-matched text resources in language modeling. A major problem in domain adaptation is that it is hard to obtain adequate adaptation effects from out-of-domain text resources. To tackle this problem, our idea is to carry out model merging in a latent variable space created from latent words language models (LWLMs). The latent variables in LWLMs are represented as specific words selected from the observed word space, so LWLMs can share a common latent variable space. This enables us to perform flexible mixture modeling that takes the latent variable space into consideration. This paper presents two types of mixture modeling, i.e., LWLM mixture models and LWLM cross-mixture models. The LWLM mixture models perform latent word space mixture modeling to mitigate the domain mismatch problem. Furthermore, in the LWLM cross-mixture models, LMs individually constructed from partially matched text resources are split into two element models, each of which can be subjected to mixture modeling. For these approaches, this paper also describes methods to optimize the mixture weights using a validation data set. Experiments show that mixture modeling in the latent word space achieves performance improvements for both the target domain and the out-of-domain data compared with mixture modeling in the observed word space.

  • Submodular Based Unsupervised Data Selection

    Aiying ZHANG  Chongjia NI  

     
    PAPER-Speech and Hearing

      Publicized:
    2018/03/14
      Page(s):
    1591-1604

    Automatic speech recognition (ASR) and keyword search (KWS) have increasingly found their way into our everyday lives, and their success comes down to many factors, among which the large amount of speech data used for acoustic modeling is key. However, it is difficult and time-consuming to acquire a large amount of transcribed speech data for some languages, especially low-resource languages. Thus, under low-resource conditions, it becomes important to choose which data to transcribe for acoustic modeling in order to improve the performance of ASR and KWS. In terms of acoustic data for acoustic modeling, there are two options: using target language data, or using a large amount of data from other source languages for cross-lingual transfer. In this paper, we propose several approaches for efficiently selecting acoustic data for acoustic modeling. For target language data, a submodular-based unsupervised data selection approach is proposed, which selects more informative and representative utterances for manual transcription. For other source language data, a submodular multilingual data selection approach based on data highly misclassified as the target language, and a knowledge-based group multilingual data selection approach, are proposed. Using the selected multilingual data to train a multilingual deep neural network for cross-lingual transfer improves the performance of ASR and KWS in the target language. Our proposed multilingual data selection approach also outperforms a language-identification-based multilingual data selection approach. In this paper, we further analyze and compare the influence of the language factor and the acoustic factor on the performance of ASR and KWS. The influence of different amounts of target language data on the performance of ASR and KWS under mono-lingual and cross-lingual conditions is also compared and analyzed, and some significant conclusions are drawn.
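
    The abstract does not spell out the submodular objective; a common choice for selecting informative and representative utterances is the facility-location function maximized greedily. The sketch below is illustrative only and assumes a precomputed utterance-to-utterance similarity matrix; it is not the authors' selection criterion.

    import numpy as np

    def greedy_facility_location(sim, budget):
        """Greedy maximisation of the facility-location function
            f(A) = sum_i max_{j in A} sim[i, j],
        a standard monotone submodular objective for picking a subset of
        `budget` utterances that is representative of the whole pool.
        `sim` is an (n, n) similarity matrix between utterances."""
        n = sim.shape[0]
        selected = []
        coverage = np.zeros(n)          # current max similarity to the selected set
        for _ in range(budget):
            # marginal gain of adding each candidate column j
            gains = np.maximum(sim, coverage[:, None]).sum(axis=0) - coverage.sum()
            gains[selected] = -np.inf   # never pick the same utterance twice
            best = int(np.argmax(gains))
            selected.append(best)
            coverage = np.maximum(coverage, sim[:, best])
        return selected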

  • Deblocking Artifact of Satellite Image Based on Adaptive Soft-Threshold Anisotropic Filter Using Wavelet

    RISNANDAR  Masayoshi ARITSUGI  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/02/26
      Page(s):
    1605-1620

    New deblocking artifact reduction algorithms based on a nonlinear adaptive soft-threshold anisotropic filter in the wavelet domain are proposed. Our deblocking algorithm uses a soft threshold, adaptive wavelet direction, an adaptive anisotropic filter, and estimation. The novelties of this paper are an adaptive soft threshold for deblocking and an optimal intersection of confidence intervals (OICI) method for deblocking artifact estimation. The soft-threshold values adapt to the different thresholds of flat areas, texture areas, and blocking artifacts. The OICI is a reconstruction technique for the estimated deblocking artifact that improves the acceptable quality level of the estimate and reduces the execution time of deblocking artifact estimation compared to other methods. Our adaptive OICI method outperforms other adaptive deblocking methods. Our estimated deblocking algorithms achieve up to 98% improvement in MSE, up to 89% improvement in RMSE, and up to 99% improvement in MAE. We also obtained up to a 77.98% reduction in the computational time of deblocking artifact estimation compared to other methods. We estimated the shift-and-add algorithms using the Euler++ (E++) and Runge-Kutta of order 4++ (RK4++) algorithms, which iterate a one-step ordinary differential equation integration method. Experimental results showed that our E++ and RK4++ algorithms reduce the computational time of shift-and-add, and that the RK4++ algorithm is superior to the E++ algorithm.

  • Image Denoising Using Block-Rotation-Based SVD Filtering in Wavelet Domain

    Min WANG  Shudao ZHOU  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/03/14
      Page(s):
    1621-1628

    This paper proposes an image denoising method using singular value decomposition (SVD) with block-rotation-based operations in the wavelet domain. First, we decompose a noisy image into sub-blocks and use the single-level discrete 2-D wavelet transform to decompose each sub-block into a low-frequency part and high-frequency parts. Then, we use SVD and rotation-based SVD with the rank-1 approximation to filter the noise in the different high-frequency parts and obtain the denoised sub-blocks. Finally, we reconstruct each sub-block from the low-frequency part and the filtered high-frequency parts by the inverse wavelet transform, and reassemble the denoised sub-blocks to obtain the final denoised image. Experiments show the effectiveness of this method compared with relevant methods.
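
    A minimal sketch of the core step, rank-1 SVD filtering of the single-level 2-D wavelet high-frequency parts of one sub-block, is given below using NumPy and PyWavelets; the block-rotation operation and the reassembly of sub-blocks described in the abstract are omitted, and the function names are illustrative.

    import numpy as np
    import pywt

    def rank1_svd_filter(band):
        """Keep only the rank-1 SVD approximation of a high-frequency sub-band,
        which suppresses weakly correlated (noise-like) components."""
        U, s, Vt = np.linalg.svd(band, full_matrices=False)
        return s[0] * np.outer(U[:, 0], Vt[0, :])

    def denoise_block(block, wavelet='haar'):
        """Single-level 2-D DWT, rank-1 filtering of the three high-frequency
        parts, and reconstruction of the block by the inverse transform."""
        cA, (cH, cV, cD) = pywt.dwt2(block, wavelet)
        cH, cV, cD = (rank1_svd_filter(b) for b in (cH, cV, cD))
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)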

  • Linear-Time Algorithm in Bayesian Image Denoising based on Gaussian Markov Random Field

    Muneki YASUDA  Junpei WATANABE  Shun KATAOKA  Kazuyuki TANAKA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/03/02
      Page(s):
    1629-1639

    In this paper, we consider Bayesian image denoising based on a Gaussian Markov random field (GMRF) model, for which we propose a new algorithm. Our method can solve Bayesian image denoising problems, including hyperparameter estimation, in O(n) time, where n is the number of pixels in a given image. In terms of the order of the computational time, this is a state-of-the-art algorithm for the present problem setting. Moreover, the results of our numerical experiments show that our method is in fact effective in practice.
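
    For orientation, the posterior mean of a simple GMRF denoising model with fixed hyperparameters and periodic boundary conditions can be computed in closed form in the 2-D DFT basis (O(n log n); this is not the paper's O(n) algorithm, and hyperparameter estimation is omitted). The sketch below is illustrative only.

    import numpy as np

    def gmrf_posterior_mean(y, lam, alpha):
        """Posterior mean of the GMRF denoising model
            argmin_x  lam/2 * ||x - y||^2 + alpha/2 * sum_{i~j} (x_i - x_j)^2
        under periodic boundary conditions. The normal equations
        (lam*I + alpha*L) x = lam*y are solved in the 2-D DFT basis,
        which diagonalises the lattice Laplacian L."""
        h, w = y.shape
        fy = np.fft.fft2(y)
        u = np.fft.fftfreq(h)[:, None]
        v = np.fft.fftfreq(w)[None, :]
        lap_eig = 4.0 - 2.0 * np.cos(2 * np.pi * u) - 2.0 * np.cos(2 * np.pi * v)
        fx = lam * fy / (lam + alpha * lap_eig)
        return np.real(np.fft.ifft2(fx))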

  • Co-Propagation with Distributed Seeds for Salient Object Detection

    Yo UMEKI  Taichi YOSHIDA  Masahiro IWAHASHI  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/03/09
      Page(s):
    1640-1647

    In this paper, we propose a method for salient object detection based on distributed seeds and a co-propagation of seed information. Salient object detection is a technique that estimates objects important to humans by calculating saliency values of pixels. Previous salient object detection methods often produce incorrect saliency values near salient objects in images containing multiple objects, a problem called saliency leakage. Therefore, we propose a method based on co-propagation, the scale-invariant feature transform, the high-dimensional color transform, and machine learning to reduce the leakage. First, the proposed method estimates regions clearly located in salient objects and in the background, called seeds, which are distributed over the image. Next, the saliency information of the seeds is propagated simultaneously, which we refer to as co-propagation. The proposed method can reduce the leakage because the co-propagated information from each type of seed collides near the object boundary. Experiments show that the proposed method significantly outperforms state-of-the-art methods in mean absolute error and F-measure, and perceptually reduces the leakage.

  • Objective Evaluation of Impression of Faces with Various Female Hairstyles Using Field of Visual Perception

    Naoyuki AWANO  Kana MOROHOSHI  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2018/03/22
      Page(s):
    1648-1656

    Most people are concerned about their appearance, and the easiest way to change one's appearance is to change one's hairstyle. However, apart from professional hairstylists, it is difficult to judge objectively which hairstyle suits a person. Oval faces are currently said to be the ideal facial shape in terms of suitability to various hairstyles. Meanwhile, the field of visual perception (FVP), recently proposed in the field of cognitive science, has attracted attention as a model for representing the visual perception phenomenon. Moreover, a computation model for digital images has been proposed, and it is expected to be used in the quantitative evaluation of sensibility and sensitivity, called “kansei.” Quantitative evaluation of the “goodness of patterns” and the “strength of impressions” by evaluating distributions of the field has been reported. However, it is unknown whether the evaluation method can be generalized to various subjects, because it has been applied only to a few research subjects, such as characters, text, and simple graphics. In this study, we apply the FVP to facial images with various hairstyles for the first time and verify whether it has the potential to evaluate impressions of female faces. Specifically, we verify whether the impressions of facial images that combine various facial shapes and female hairstyles can be represented using the FVP. We prepare many combined images of facial shapes and hairstyles and conduct a psychological experiment to evaluate their impressions. Moreover, we compute the FVP of each image and propose a novel evaluation method based on analyzing the distributions. Both the conventional and proposed evaluation values correlated with the psychological evaluation values after normalization, demonstrating the effectiveness of the FVP as an image feature for evaluating faces.

  • SOM-Based Vector Recognition with Pre-Grouping Functionality

    Yuto KUROSAKI  Masayoshi OHTA  Hidetaka ITO  Hiroomi HIKAWA  

     
    PAPER-Biocybernetics, Neurocomputing

      Publicized:
    2018/03/20
      Page(s):
    1657-1665

    This paper discusses the effect of pre-grouping on vector classification based on the self-organizing map (SOM). The SOM is an unsupervised learning neural network and is used to form clusters of vectors using its topology-preserving nature. The use of SOMs in practical applications, however, may pose difficulties in achieving high recognition accuracy. For example, in image recognition, accuracy is degraded by variations in lighting conditions. This paper considers the effect of pre-grouping feature vectors in such applications. The proposed pre-grouping functionality is also based on the SOM and is introduced into a new parallel configuration of the previously proposed SOM-Hebb classifiers. The overall system is implemented and applied to position identification from images obtained in indoor and outdoor settings. The system first groups images according to a rough representation of their brightness profiles, and then assigns each SOM-Hebb classifier in the parallel configuration to one of the groups. The recognition parameters of each classifier are tuned for the vectors belonging to its group. A comparison between the recognition systems with and without grouping shows that grouping can improve recognition accuracy.

  • Hybrid Mechanism to Detect Paroxysmal Stage of Atrial Fibrillation Using Adaptive Threshold-Based Algorithm with Artificial Neural Network

    Mohamad Sabri bin SINAL  Eiji KAMIOKA  

     
    PAPER-Biological Engineering

      Publicized:
    2018/03/14
      Page(s):
    1666-1676

    Automatic detection of heart cycle abnormalities in long-duration ECG data is a crucial technique for diagnosing early stages of heart disease. Concretely, the Paroxysmal stage of Atrial Fibrillation rhythms (ParAF) must be discriminated from Normal Sinus rhythms (NS). The two waveforms in ECG data are very similar, and thus it is difficult to completely detect the Paroxysmal stage of Atrial Fibrillation rhythms. Previous studies have tried to solve this issue and some of them achieved the discrimination with a high degree of accuracy. However, their accuracies do not reach 100%. In addition, no research has achieved it in a long duration, e.g. 12 hours, of ECG data. In this study, a new mechanism to tackle these issues is proposed: the “Door-to-Door” algorithm is introduced to accurately and quickly detect significant peaks of the heart cycle in 12 hours of ECG data and to discriminate obvious ParAF rhythms from NS rhythms. In addition, a quantitative method using an Artificial Neural Network (ANN), which discriminates unobvious ParAF rhythms from NS rhythms, is investigated. The performance evaluation revealed that the Door-to-Door algorithm achieves an accuracy of 100% in detecting the significant peaks of the heart cycle in 17 NS ECG recordings. In addition, it was verified that the ANN-based method achieves an accuracy of 100% in discriminating the Paroxysmal stage of 15 Atrial Fibrillation recordings from the 17 NS recordings. Furthermore, it was confirmed that the computational time of the proposed mechanism is less than half that of the previous study. From these achievements, it is concluded that the proposed mechanism can practically be used to diagnose early stages of heart disease.

  • On Maximizing the Lifetime of Wireless Sensor Networks in 3D Vegetation-Covered Fields

    Wenjie YU  Xunbo LI  Zhi ZENG  Xiang LI  Jian LIU  

     
    LETTER-Fundamentals of Information Systems

      Publicized:
    2018/03/01
      Page(s):
    1677-1681

    In this paper, the problem of extending the lifetime of wireless sensor networks (WSNs) with redundant sensor nodes deployed in 3D vegetation-covered fields is modeled, including the communication, network, and energy models. Generally, such a problem cannot be solved directly by a conventional method. Here we propose an Artificial Bee Colony (ABC) based optimal grouping algorithm (ABC-OG) to solve it. The main contribution of the algorithm is to find the optimal number of feasible subsets (FSs) of the WSN and assign them to work in rotation. It is verified that reasonably grouping sensors into FSs averages out the network energy consumption and prolongs the lifetime of the network. To further verify the effectiveness of ABC-OG, two other algorithms are included for comparison. The experimental results show that the proposed ABC-OG algorithm provides better optimization performance.

  • BackAssist: Augmenting Mobile Touch Manipulation with Back-of-Device Assistance

    Liang CHEN  Dongyi CHEN  Xiao CHEN  

     
    LETTER-Computer System

      Publicized:
    2018/03/16
      Page(s):
    1682-1685

    Operations such as text entry and zooming are simple and frequently used on mobile touch devices. However, these operations are far from perfectly supported. In this paper, we present our prototype, BackAssist, which takes advantage of back-of-device input to augment front-of-device touch interaction. Furthermore, we present the results of a user study that evaluates whether users can master the back-of-device control of BackAssist. The results show that the back-of-device control can be easily grasped and used by ordinary smartphone users. Finally, we present two BackAssist-supported applications, a virtual keyboard application and a map application. Users who tried out the two applications gave positive feedback on the BackAssist-supported augmentation.

  • Source-Side Detection of DRDoS Attack Request with Traffic-Aware Adaptive Threshold

    Sinh-Ngoc NGUYEN  Van-Quyet NGUYEN  Giang-Truong NGUYEN  JeongNyeo KIM  Kyungbaek KIM  

     
    LETTER-Information Network

      Publicized:
    2018/03/12
      Page(s):
    1686-1690

    Distributed Reflective Denial of Service (DRDoS) attacks have gained huge popularity and become a major factor in a number of massive cyber-attacks. Usually, the attackers launch this kind of attack with a small volume of requests that generates a large volume of attack traffic aimed at the victim, using IP spoofing from legitimate hosts. There have been several approaches, such as static-threshold-based and confirmation-based approaches, focusing on DRDoS attack detection at the victim's side. However, these approaches have significant disadvantages: (1) they are only passive defenses after the attack, and (2) it is hard to trace back the attackers. To address this problem, considerable attention has been paid to detecting DRDoS attacks at the source side. Because the existing proposals in this direction are ineffective against small volumes of attack traffic, there is still room for improvement. In this paper, we propose a novel method to detect DRDoS attack request traffic on SDN (Software-Defined Network)-enabled gateways at the source side of the attack traffic. Our method adjusts the sampling rate and provides a traffic-aware adaptive threshold along with a margin, based on analyzing the traffic observed behind the gateways. Experimental results show that the proposed method is a promising solution for detecting DRDoS attack requests at the source side.
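
    The letter's exact threshold formula is not given in the abstract; the sketch below is only a generic illustration of a traffic-aware adaptive threshold, assuming the threshold is derived from a sliding window of observed request counts as mean plus a multiple of the standard deviation plus a fixed margin.

    from collections import deque

    class AdaptiveThresholdDetector:
        """Illustrative traffic-aware adaptive threshold (not the paper's exact
        formula): the threshold tracks the recent request rate behind the gateway
        as mean + k * std over a sliding window, plus a fixed safety margin."""

        def __init__(self, window=60, k=3.0, margin=10.0):
            self.history = deque(maxlen=window)   # recent per-interval request counts
            self.k = k
            self.margin = margin

        def update_and_check(self, count):
            """Return True if `count` (requests in the current interval) exceeds the
            adaptive threshold, then fold it into the history."""
            if len(self.history) >= 2:
                mean = sum(self.history) / len(self.history)
                var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
                threshold = mean + self.k * var ** 0.5 + self.margin
            else:
                threshold = float('inf')          # not enough history yet
            self.history.append(count)
            return count > threshold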

  • Horizontal Partition for Scalable Control in Software-Defined Data Center Networks

    Shaojun ZHANG  Julong LAN  Chao QI  Penghao SUN  

     
    LETTER-Information Network

      Publicized:
    2018/03/07
      Page(s):
    1691-1693

    Distributed control plane architectures have been employed in software-defined data center networks to improve the scalability of the control plane. However, since the flow space is partitioned by assigning switches to different controllers, the network topology is also partitioned and the rule setup process has to invoke multiple controllers. Besides, control load balancing based on switch migration is heavyweight. In this paper, we propose a lightweight load partition method that decouples the flow space from the network topology. The flow space is partitioned with hosts rather than switches as carriers, which supports fine-grained and lightweight load balancing. Moreover, switches no longer need to be assigned to different controllers; we keep all of them controlled by every controller, so that each flow request can be processed by exactly one controller in a centralized style. Evaluations show that our scheme reduces rule setup costs and achieves lightweight load balancing.

  • Optimizing Non-Uniform Bandwidth Reservation Based on Meter Table of Openflow

    Liaoruo HUANG  Qingguo SHEN  Zhangkai LUO  

     
    LETTER-Information Network

      Publicized:
    2018/03/14
      Page(s):
    1694-1698

    Bandwidth reservation is an important way to guarantee deterministic end-to-end service quality. However, with the traditional bandwidth reservation mechanism, the allocated bandwidth at each link is by default the same, without considering the available resources of each link, which may lead to unbalanced resource utilization and limit the number of user connections the network can accommodate. In this paper, we propose a non-uniform bandwidth reservation method that further balances the resource utilization of the network by optimizing the reserved bandwidth at each link according to its link load. Furthermore, to implement the proposed method, we devise a flexible and automatic bandwidth reservation mechanism based on the meter table of OpenFlow. Simulations show that our method achieves better load-balancing performance and lets the network accommodate more user connections than the traditional methods in most application scenarios.

  • Hybrid Message Logging Protocol with Little Overhead for Two-Level Hierarchical and Distributed Architectures

    Jinho AHN  

     
    LETTER-Dependable Computing

      Publicized:
    2018/03/01
      Page(s):
    1699-1702

    In this paper, we present a hybrid message logging protocol consisting of three modules for two-level hierarchical and distributed architectures to address the drawbacks of sender-based message logging. The first module reduces the number of in-group control messages, and the remaining modules reduce the number of inter-group control messages while localizing recovery. In addition, the protocol can distribute the load of logging and keeping inter-group messages among group members as evenly as possible. The simulation results show that the proposed protocol considerably outperforms the traditional protocol in terms of message logging overhead and scalability.

  • Extreme Learning Machine with Superpixel-Guided Composite Kernels for SAR Image Classification

    Dongdong GUAN  Xiaoan TANG  Li WANG  Junda ZHANG  

     
    LETTER-Pattern Recognition

      Publicized:
    2018/03/14
      Page(s):
    1703-1706

    Synthetic aperture radar (SAR) image classification is a popular yet challenging research topic in the field of SAR image interpretation. This paper presents a new classification method based on the extreme learning machine (ELM) and superpixel-guided composite kernels (SGCK). By introducing the generalized likelihood ratio (GLR) similarity, a modified simple linear iterative clustering (SLIC) algorithm is first developed to generate superpixels for SAR images. Instead of using a fixed-size region, the shape-adaptive superpixel is used to exploit spatial information, which is effective for classifying pixels in detailed and near-edge regions. Following the composite-kernel framework, the SGCK is constructed based on the spatial information and the backscatter intensity information. Finally, the SGCK is incorporated into an ELM classifier. Experimental results on both simulated and real SAR images demonstrate that the proposed framework is superior to some traditional classification methods.
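
    A minimal sketch of the composite-kernel idea combined with a kernel ELM classifier is given below; it assumes precomputed per-sample spatial and intensity feature vectors and RBF base kernels, and it does not include the GLR-based SLIC superpixel generation. All names and parameters are illustrative.

    import numpy as np

    def rbf_kernel(A, B, gamma):
        """RBF kernel matrix between row-wise feature sets A (n, d) and B (m, d)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def composite_kernel(spat_a, int_a, spat_b, int_b, mu=0.5, gamma=1.0):
        """Weighted sum of a spatial-feature kernel and an intensity-feature
        kernel, following the composite-kernel framework."""
        return mu * rbf_kernel(spat_a, spat_b, gamma) + \
               (1.0 - mu) * rbf_kernel(int_a, int_b, gamma)

    def kernel_elm_train(K_train, labels, n_classes, C=10.0):
        """Kernel ELM: output weights alpha = (I/C + K)^-1 T with one-hot targets."""
        n = K_train.shape[0]
        T = np.eye(n_classes)[labels]                 # one-hot targets (n, c)
        return np.linalg.solve(np.eye(n) / C + K_train, T)

    def kernel_elm_predict(K_test, alpha):
        """Class decision: argmax of the kernel expansion K_test (m, n) @ alpha."""
        return np.argmax(K_test @ alpha, axis=1)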

  • Cross-Layer Management for Multiple Adaptive Streaming Clients in Wireless Home Networks

    Duc V. NGUYEN  Huyen T. T. TRAN  Nam PHAM NGOC  Truong Cong THANG  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/02/27
      Page(s):
    1707-1710

    In this letter, we propose a solution for managing multiple adaptive streaming clients running on different devices in a wireless home network. Our solution consists of two main aspects: a manager that determines the bandwidth allocated to each client, and a client-based throughput control mechanism that regulates the video traffic throughput of each client. Experimental results using a real test-bed show that our solution effectively improves the quality for concurrent streaming clients.

  • Accurate Target Motion Analysis from a Small Measurement Set Using RANSAC

    Hyunhak SHIN  Bonhwa KU  Wooyoung HONG  Hanseok KO  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/02/23
      Page(s):
    1711-1714

    Most conventional research on target motion analysis (TMA) based on least squares (LS) has focused on performing asymptotically unbiased estimation with inaccurate measurements. However, such research may often yield inaccurate estimation results when only a small set of measurement data is used. In this paper, we propose an accurate TMA method that works even with a small set of bearing measurements. First, a subset of measurements is selected by the random sample consensus (RANSAC) algorithm. Then, LS is applied to the selected subset to estimate the target motion. Finally, to increase accuracy, the target motion estimate is refined through a bias compensation algorithm. Simulation results verify the effectiveness of the proposed method.
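
    A minimal sketch of the RANSAC-plus-LS pipeline is given below, using the standard pseudo-linear least-squares formulation for bearings-only TMA with a constant-velocity target (bearings measured from the y-axis/north); the bias compensation step of the letter is omitted and all names are illustrative.

    import numpy as np

    def pseudo_linear_ls(bearings, times, obs_xy):
        """Pseudo-linear least squares for bearings-only TMA with a
        constant-velocity target: solves A @ [x0, y0, vx, vy] = b, where each
        bearing b_i gives one linear equation
        (x_t - x_o) cos(b_i) - (y_t - y_o) sin(b_i) = 0."""
        c, s = np.cos(bearings), np.sin(bearings)
        A = np.column_stack([c, -s, times * c, -times * s])
        b = c * obs_xy[:, 0] - s * obs_xy[:, 1]
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        return theta                                    # [x0, y0, vx, vy]

    def ransac_tma(bearings, times, obs_xy, n_iter=200, thresh=np.deg2rad(1.0)):
        """Select a consistent subset of bearing measurements with RANSAC,
        then refit the pseudo-linear LS estimator on the inliers."""
        rng = np.random.default_rng(0)
        n, best_inliers = len(bearings), None
        for _ in range(n_iter):
            idx = rng.choice(n, size=4, replace=False)  # minimal sample
            x0, y0, vx, vy = pseudo_linear_ls(bearings[idx], times[idx], obs_xy[idx])
            pred = np.arctan2(x0 + vx * times - obs_xy[:, 0],
                              y0 + vy * times - obs_xy[:, 1])
            resid = np.angle(np.exp(1j * (bearings - pred)))  # wrap to [-pi, pi]
            inliers = np.abs(resid) < thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return pseudo_linear_ls(bearings[best_inliers], times[best_inliers],
                                obs_xy[best_inliers])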

  • Boundary-Aware Superpixel Segmentation Based on Minimum Spanning Tree

    Li XU  Bing LUO  Zheng PEI  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/02/23
      Page(s):
    1715-1719

    In this paper, we propose a boundary-aware superpixel segmentation method that can quickly and exactly extract superpixels within a non-iterative framework. The basic idea is to construct a minimum spanning tree (MST) based on structure edges to measure the local similarity among pixels, and then label each pixel with the index of its shortest-path seed. Specifically, we first construct the MST on the original pixels with a boundary feature to calculate the similarity of adjacent pixels. The geodesic distance between pixels can then be exactly obtained with two rounds of tree recursion. We determine each pixel's label as the index of the seed with the shortest path. Experimental results on the BSD500 segmentation benchmark demonstrate that the proposed method obtains the best performance compared with seven state-of-the-art methods. Especially in low-density situations, our method can obtain boundary-aware oversegmentation regions.
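
    A minimal sketch of the MST-plus-shortest-path-seed idea is given below using SciPy; it substitutes a simple intensity-difference edge weight for the paper's structure-edge feature and uses multi-source Dijkstra on the tree instead of the two-round tree recursion. It assumes a grayscale float image and a list of seed coordinates.

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

    def mst_superpixels(image, seeds):
        """Label each pixel with the index of its nearest seed, where distance is
        the geodesic distance along a minimum spanning tree of the 4-connected
        pixel grid with intensity-difference edge weights."""
        h, w = image.shape
        idx = np.arange(h * w).reshape(h, w)
        rows, cols, weights = [], [], []
        # 4-connected grid edges (right and down neighbours)
        for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            rows.append(a.ravel())
            cols.append(b.ravel())
            diff = np.abs(image.ravel()[a.ravel()] - image.ravel()[b.ravel()])
            weights.append(diff + 1e-6)   # keep zero-difference edges in the sparse graph
        g = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
        mst = minimum_spanning_tree(g)                          # keep only tree edges
        seed_ids = [idx[r, c] for r, c in seeds]
        dist = dijkstra(mst, directed=False, indices=seed_ids)  # (n_seeds, h*w)
        return np.argmin(dist, axis=0).reshape(h, w)            # label = nearest seed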