
Keyword Search Result

[Keyword] ALG (2355 hits)

Results 261-280 of 2355

  • Computational Complexity and Polynomial Time Procedure of Response Property Problem in Workflow Nets

    Muhammad Syafiq BIN AB MALEK  Mohd Anuaruddin BIN AHMADON  Shingo YAMAGUCHI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Vol:
    E101-D No:6
      Page(s):
    1503-1510

    Response property is a kind of liveness property. The response property problem is defined as follows: given two activities α and β, whenever α is executed, is β always executed after that? In this paper, we tackle the problem in terms of Workflow Petri nets (WF-nets for short). Our results are that (i) the response property problem for acyclic WF-nets is decidable, (ii) the problem is intractable for acyclic asymmetric choice (AC) WF-nets, and (iii) the problem for acyclic bridge-less well-structured WF-nets is solvable in polynomial time. We illustrate the usefulness of the procedure with an application example.
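
    A naive illustration of the decision problem (not the paper's polynomial-time procedure, which exploits the structure of bridge-less well-structured WF-nets): the sketch below checks the response property on a small acyclic activity graph by enumerating all paths, which is exponential in general. The graph, activity names and the helper `response_holds` are hypothetical.

```python
def response_holds(edges, start, end, alpha, beta):
    """Naive response-property check on an acyclic activity graph:
    on every start-to-end path, every occurrence of alpha must be
    followed later by beta. Exponential in the worst case."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)

    def paths(node, prefix):
        prefix = prefix + [node]
        if node == end:
            yield prefix
            return
        for nxt in succ.get(node, []):
            yield from paths(nxt, prefix)

    for p in paths(start, []):
        for i, act in enumerate(p):
            if act == alpha and beta not in p[i + 1:]:
                return False
    return True

# Toy workflow: on every path, "alpha" is eventually followed by "beta".
edges = [("s", "a"), ("a", "alpha"), ("alpha", "beta"), ("beta", "e"), ("s", "e")]
print(response_holds(edges, "s", "e", "alpha", "beta"))  # True
```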

  • Linear-Time Algorithm in Bayesian Image Denoising based on Gaussian Markov Random Field

    Muneki YASUDA  Junpei WATANABE  Shun KATAOKA  Kazuyuki TANAKA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/03/02
      Vol:
    E101-D No:6
      Page(s):
    1629-1639

    In this paper, we consider Bayesian image denoising based on a Gaussian Markov random field (GMRF) model, for which we propose a new algorithm. Our method can solve Bayesian image denoising problems, including hyperparameter estimation, in O(n) time, where n is the number of pixels in a given image. In terms of the order of computational time, this is a state-of-the-art algorithm for the present problem setting. Moreover, the results of our numerical experiments show that our method is in fact effective in practice.
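
    The sketch below is a generic GMRF MAP denoiser of the kind the abstract describes, not the authors' O(n) algorithm: it solves min_x ||x-y||² + λ·xᵀLx by diagonalizing the 4-neighbor grid Laplacian L with a 2-D DCT (O(n log n) overall), and the smoothing weight `lam` is a fixed hyperparameter rather than one estimated by the method.

```python
import numpy as np
from scipy.fft import dctn, idctn

def gmrf_denoise(y, lam):
    """MAP denoising under a GMRF prior: minimize ||x - y||^2 + lam * x^T L x,
    where L is the 4-neighbor grid Laplacian. With Neumann boundaries, L is
    diagonalized by the 2-D DCT, so the solution is per-frequency shrinkage."""
    m, n = y.shape
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    eig = (2 - 2 * np.cos(np.pi * i / m)) + (2 - 2 * np.cos(np.pi * j / n))
    Y = dctn(y, norm="ortho")
    return idctn(Y / (1.0 + lam * eig), norm="ortho")

# Toy usage: denoising a noisy gradient image reduces the mean error.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
print(np.abs(gmrf_denoise(noisy, 5.0) - clean).mean()
      < np.abs(noisy - clean).mean())  # True
```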

  • Hybrid Mechanism to Detect Paroxysmal Stage of Atrial Fibrillation Using Adaptive Threshold-Based Algorithm with Artificial Neural Network

    Mohamad Sabri bin SINAL  Eiji KAMIOKA  

     
    PAPER-Biological Engineering

      Publicized:
    2018/03/14
      Vol:
    E101-D No:6
      Page(s):
    1666-1676

    Automatic detection of heart cycle abnormalities in long-duration ECG data is a crucial technique for diagnosing heart diseases at an early stage. Concretely, the paroxysmal stage of atrial fibrillation rhythms (ParAF) must be discriminated from normal sinus rhythms (NS). The two waveforms are very similar in ECG data, and thus it is difficult to detect the paroxysmal stage of atrial fibrillation completely. Previous studies have tried to solve this issue, and some achieved the discrimination with a high degree of accuracy. However, their accuracies do not reach 100%. In addition, no research has achieved it over a long duration of ECG data, e.g. 12 hours. In this study, a new mechanism to tackle these issues is proposed: the "Door-to-Door" algorithm is introduced to accurately and quickly detect significant peaks of the heart cycle in 12 hours of ECG data and to discriminate obvious ParAF rhythms from NS rhythms. In addition, a quantitative method using an artificial neural network (ANN), which discriminates unobvious ParAF rhythms from NS rhythms, is investigated. The performance evaluation revealed that the Door-to-Door algorithm achieves 100% accuracy in detecting the significant peaks of the heart cycle in 17 NS ECG recordings. It was also verified that the ANN-based method achieves 100% accuracy in discriminating the paroxysmal stage of 15 atrial fibrillation recordings from 17 NS recordings. Furthermore, it was confirmed that the computational time of the proposed mechanism is less than half that of the previous study. From these achievements, we conclude that the proposed mechanism can practically be used to diagnose heart diseases at an early stage.
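
    A minimal sketch of an adaptive threshold-based peak detector in the spirit of the first stage above. This is not the Door-to-Door algorithm; the bootstrap window, refractory period and adaptation constant are illustrative assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory=0.25, alpha=0.125):
    """Adaptive-threshold R-peak detection: a sample is a beat if it is a
    local maximum above a threshold that slowly tracks recent peak heights,
    with a refractory window suppressing double detections."""
    min_gap = int(refractory * fs)            # samples to skip after a beat
    thr = 0.6 * np.max(ecg[:2 * fs])          # bootstrap from first 2 seconds
    peaks, last = [], -min_gap
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
                thr = (1 - alpha) * thr + alpha * 0.6 * ecg[i]  # adapt slowly
    return peaks

# Toy signal: 1 Hz spikes on a small sinusoidal baseline.
fs = 250
t = np.arange(10 * fs) / fs
ecg = 0.1 * np.sin(2 * np.pi * t)
ecg[::fs] += 1.0
print(len(detect_r_peaks(ecg, fs)))  # 9 (the spike at t=0 is outside the scan)
```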

  • Compact CAR: Low-Overhead Cache Replacement Policy for an ICN Router

    Atsushi OOKA  Suyong EUM  Shingo ATA  Masayuki MURATA  

     
    PAPER-Network System

      Publicized:
    2017/12/18
      Vol:
    E101-B No:6
      Page(s):
    1366-1378

    Information-centric networking (ICN) has gained attention from network research communities due to its ability to disseminate content efficiently. The in-network caching function in ICN plays an important role in realizing this design goal. However, many in-network caching researchers have focused on where to cache rather than how to cache: the former is known as content deployment in the network, and the latter is known as cache replacement in an ICN router. Although cache replacement has been intensively researched in the context of web caching and content delivery networks, the conventional approaches cannot be directly applied to ICN due to the fine granularity of chunks in ICN, which eventually changes the access patterns. In this paper, we argue that ICN requires a novel cache replacement algorithm to fulfill the requirements set for high-performance ICN routers. Our solution, Compact CLOCK with Adaptive Replacement (Compact CAR), is a novel cache replacement algorithm that satisfies these requirements. The evaluation results show that the cache memory needed to achieve a desired performance can be reduced by 90% compared to conventional approaches such as FIFO and CLOCK.
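
    For reference, the base policy that Compact CAR builds on can be stated in a few lines. The sketch below is plain CLOCK only; the compact encoding and the adaptivity that the paper adds are not shown.

```python
class ClockCache:
    """Plain CLOCK replacement: one reference bit per entry and a sweeping
    hand that evicts the first entry whose bit is clear."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []    # [key, ref_bit] pairs
        self.index = {}    # key -> slot position
        self.hand = 0

    def access(self, key):
        if key in self.index:                  # hit: set the reference bit
            self.slots[self.index[key]][1] = 1
            return True
        if len(self.slots) < self.capacity:    # cold miss: fill a free slot
            self.index[key] = len(self.slots)
            self.slots.append([key, 1])
            return False
        while True:                            # miss: sweep for a 0-bit victim
            victim, ref = self.slots[self.hand]
            if ref:
                self.slots[self.hand][1] = 0
            else:
                del self.index[victim]
                self.slots[self.hand] = [key, 1]
                self.index[key] = self.hand
            self.hand = (self.hand + 1) % self.capacity
            if not ref:
                return False

cache = ClockCache(2)
print([cache.access(k) for k in "a b a c b".split()])
# [False, False, True, False, True] -- "c" evicted "a", not the recent "b"
```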

  • Counting Algorithms for Recognizable and Algebraic Series

    Bao Trung CHU  Kenji HASHIMOTO  Hiroyuki SEKI  

     
    PAPER-Formal Approaches

      Publicized:
    2018/03/16
      Vol:
    E101-D No:6
      Page(s):
    1479-1490

    Formal series are a natural extension of formal languages obtained by associating each word with a value called a coefficient or a weight. Among them, recognizable series and algebraic series can be regarded as extensions of regular languages and context-free languages, respectively. The coefficient of a word w can represent quantities such as the cost of an operation on w or the probability that w is emitted. One possible application of formal series is string counting in the quantitative analysis of software. In this paper, we define counting problems for formal series and propose algorithms for them. The membership problem for an automaton or a grammar corresponds to the problem of computing the coefficient of a given word in a given series. Accordingly, we define the counting problem for formal series in the following two ways. For a formal series S and a natural number d, we define CC(S,d) to be the sum of the coefficients of all the words of length d in S, and SC(S,d) to be the number of words of length d that have non-zero coefficients in S. We show that for a given recognizable series S and a natural number d, CC(S,d) can be computed in O(η log d) time, where η is an upper bound on the time needed for a single state-transition matrix operation, and that if the state-transition matrices of S commute under multiplication, SC(S,d) can be computed in time polynomial in d. We extend these notions to tree series and discuss how to compute them efficiently. We also propose an algorithm that computes CC(S,d) in time quadratic in d for an algebraic series S. We report the CPU time of the proposed algorithm for computing CC(S,d) for several context-free grammars as S, one of which represents the syntax of the C language. To examine the applicability of the proposed algorithms to string counting for vulnerability analysis, we also present results on string counting for the Kaluza benchmark.
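
    The O(η log d) bound for CC(S,d) is just iterated squaring of the summed state-transition matrix. A minimal sketch, assuming the recognizable series is given in the usual linear-representation form (initial row vector, one matrix per symbol, final column vector):

```python
import numpy as np

def cc(initial, M, final, d):
    """CC(S, d): sum of the coefficients of all words of length d in a
    recognizable series S. With M = sum of the per-symbol transition
    matrices, CC(S, d) = initial * M^d * final, and computing M^d by
    repeated squaring uses O(log d) matrix products."""
    result = np.eye(M.shape[0], dtype=object)
    while d:
        if d & 1:
            result = result @ M
        M = M @ M
        d >>= 1
    return initial @ result @ final

# Series over {a, b} giving every word coefficient 1 (a one-state automaton):
Ma = np.array([[1]], dtype=object)
Mb = np.array([[1]], dtype=object)
init = np.array([1], dtype=object)
fin = np.array([1], dtype=object)
print(cc(init, Ma + Mb, fin, 10))  # 1024 = 2^10 words of length 10
```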

  • More New Classes of Differentially 4-Uniform Permutations with Good Cryptographic Properties

    Jie PENG  Chik How TAN  Qichun WANG  Jianhua GAO  Haibin KAN  

     
    PAPER-Cryptography and Information Security

      Vol:
    E101-A No:6
      Page(s):
    945-952

    Research on permutation polynomials over the finite field F_{2^{2k}} with significant cryptographic properties, such as low differential uniformity, high nonlinearity and high algebraic degree, has attracted a lot of attention and made considerable progress in recent years. Once used as the substitution boxes (S-boxes) in block ciphers with a substitution-permutation network (SPN) structure, such polynomials can perform well against classical cryptanalysis such as linear attacks, differential attacks and higher-order differential attacks. In this paper we put forward a new construction of differentially 4-uniform permutations over F_{2^{2k}} by modifying the inverse function on some specific subsets of the finite field. Compared with previous similar works, our new construction has several advantages. One is that it provides a very large number of Carlet-Charpin-Zinoviev equivalence classes of functions (increasing exponentially). Another advantage is that all the functions are explicitly constructed, and polynomial forms are obtained for three subclasses. The third advantage is that the chosen subsets are very large, hence none of the new functions is close to the inverse function. Therefore, our construction may provide more choices for the design of S-boxes. Moreover, we have checked by a software program for k=3 that, except for one special function, all the other functions in our construction are Carlet-Charpin-Zinoviev equivalent to existing ones.
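
    Differential uniformity, the quantity this construction keeps at 4, is straightforward to compute for a concrete S-box table. A small sketch; the example table is the inverse function over F_{2^3} built with the irreducible polynomial x³+x+1 (for odd dimension the inverse map is APN, so the value is 2, whereas the paper's permutations over F_{2^{2k}} attain 4):

```python
def differential_uniformity(sbox):
    """Maximum, over nonzero input differences a and all output differences
    b, of the number of x with S(x) XOR S(x XOR a) = b."""
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        best = max(best, max(counts))
    return best

# Inverse function over F_{2^3} (polynomial x^3 + x + 1), with 0 mapped to 0:
inv_f8 = [0, 1, 5, 6, 7, 2, 3, 4]
print(differential_uniformity(inv_f8))  # 2
```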

  • Advanced DBS (Direct-Binary Search) Method for Compensating Spatial Chromatic Errors on RGB Digital Holograms in a Wide-Depth Range with Binary Holograms

    Thibault LEPORTIER  Min-Chul PARK  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:5
      Page(s):
    848-849

    The direct-binary search method has been used for converting complex holograms into binary format. However, this algorithm is optimized to reconstruct monochromatic digital holograms and is accurate only in a narrow depth range. In this paper, we propose an advanced direct-binary search method to increase the depth of field of 3D scenes reconstructed in RGB by binary holograms.

  • Routing, Modulation Level, Spectrum and Transceiver Assignment in Elastic Optical Networks

    Mingcong YANG  Kai GUO  Yongbing ZHANG  Yusheng JI  

     
    PAPER-Fiber-Optic Transmission for Communications

      Publicized:
    2017/11/20
      Vol:
    E101-B No:5
      Page(s):
    1197-1209

    The elastic optical network (EON) is a promising new optical technology that uses spectrum resources much more efficiently than traditional wavelength division multiplexing (WDM). This paper focuses on the routing, modulation level, spectrum and transceiver allocation (RMSTA) problem in EONs. In contrast to previous works that consider only the routing and spectrum allocation (RSA) or routing, modulation level and spectrum allocation (RMSA) problems, we additionally consider transceiver allocation. Because transceivers can be used to regenerate signals (by connecting two transceivers back-to-back) along a transmission path, different regeneration sites on a transmission path result in different spectrum and transceiver usage. Thus, the RMSTA problem is more complex and more challenging than the RSA and RMSA problems. To address this problem, we first propose an integer linear programming (ILP) model whose objective is to balance spectrum usage against transceiver usage by tuning a weighting coefficient so as to minimize the cost of network operations. Then, we propose a novel virtual network-based heuristic algorithm to solve the problem and present the results of experiments on representative network topologies. The results verify that, compared to previous works, the proposed algorithm can significantly reduce both resource consumption and time complexity.
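
    As background for the spectrum side of the problem, the sketch below shows first-fit allocation under the EON continuity and contiguity constraints: the lowest block of contiguous slots free on every link of the path is taken. It is a common baseline only, with hypothetical link names; the paper's heuristic additionally places regenerators and accounts for transceivers.

```python
def first_fit_spectrum(link_usage, path_links, slots_needed):
    """First-fit spectrum allocation: find the lowest-indexed block of
    `slots_needed` contiguous slots that is free on every link of the path
    (continuity + contiguity), mark it used, and return its start index."""
    total = len(next(iter(link_usage.values())))
    for start in range(total - slots_needed + 1):
        block = range(start, start + slots_needed)
        if all(not link_usage[link][s] for link in path_links for s in block):
            for link in path_links:
                for s in block:
                    link_usage[link][s] = True
            return start
    return None  # request is blocked

usage = {"A-B": [False] * 8, "B-C": [False] * 8}
print(first_fit_spectrum(usage, ["A-B", "B-C"], 3))  # 0: slots 0-2 taken
print(first_fit_spectrum(usage, ["A-B"], 2))         # 3: next free block
```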

  • Grid-Based Parallel Algorithms of Join Queries for Analyzing Multi-Dimensional Data on MapReduce

    Miyoung JANG  Jae-Woo CHANG  

     
    PAPER-Technologies for Knowledge Support Platform

      Publicized:
    2018/01/19
      Vol:
    E101-D No:4
      Page(s):
    964-976

    Recently, join processing of large-scale datasets in MapReduce environments has become an important issue. However, existing MapReduce-based join algorithms suffer from too much overhead for constructing and updating the data index. Moreover, their similarity computation cost is high because they partition data without considering the data distribution. In this paper, we propose two grid-based join algorithms for MapReduce. First, we propose a similarity join algorithm that evenly distributes join candidates using a dynamic grid index, which partitions data considering data density and the similarity threshold. We use a bottom-up approach, merging initial grid cells into partitions and assigning them to MapReduce jobs. Second, we propose a k-NN join query processing algorithm for MapReduce. To reduce the data transmission cost, we determine an optimal grid cell size by considering the data distribution of randomly selected samples. Then, we perform the k-NN join by assigning only the related join data to a reducer. Performance analysis shows that our similarity join algorithm and our k-NN join algorithm outperform existing algorithms by up to 10 times in terms of query processing time.
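
    The core of a grid-based similarity join fits in a few lines: hash points into ε-sized cells and compare each point only against its own and neighboring cells. This local sketch omits the MapReduce distribution and the density-aware cell merging that the paper contributes.

```python
from collections import defaultdict
from itertools import product

def grid_similarity_join(points, eps):
    """Similarity join on 2-D points: report all pairs within distance eps,
    pruning candidates with a uniform grid of cell width eps."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // eps), int(p[1] // eps))].append(p)

    pairs = []
    for (cx, cy), bucket in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for q in grid.get((cx + dx, cy + dy), []):
                for p in bucket:
                    # p < q keeps each unordered pair exactly once
                    if p < q and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2:
                        pairs.append((p, q))
    return pairs

pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]
print(grid_similarity_join(pts, 0.2))  # only the two nearby points pair up
```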

  • Highly Efficient Adaptive Bandwidth Allocation Algorithm for WDM/OFDM-PON-Based Elastic Optical Access Networks

    Hiroyuki SAITO  Naoki MINATO  Hideaki TAMAI  Hironori SASAKI  

     
    PAPER

      Publicized:
    2017/10/18
      Vol:
    E101-B No:4
      Page(s):
    972-978

    Capital expenditure (CAPEX) reduction and efficient wavelength allocation are critical for future access networks. The elastic lambda aggregation network (EλAN), based on WDM and OFDM technologies, is expected to realize efficient wavelength allocation. In this paper, we propose an adaptive bandwidth allocation (ABA) algorithm for the EλAN under crowded network conditions, in which the modulation format, symbol rate and number of sub-carriers are adaptively decided based on the distance of the PON section, the QoS and the bandwidth demand of each ONU. Network simulation results show that the proposed algorithm can effectively reduce the total bandwidth, achieve consistently high spectrum efficiency and contribute to further reduction of the CAPEX of future optical access networks.
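
    A toy version of the per-ONU decision such an algorithm makes: choose the densest modulation format whose reach covers the PON-section distance, then derive the number of sub-carrier slots from the bandwidth demand. All reach and bits-per-symbol figures below are illustrative assumptions, not values from the paper.

```python
def assign_modulation(distance_km, demand_gbps, slot_ghz=12.5):
    """Distance-adaptive format selection: try the densest format first,
    fall back to hardier formats as the path gets longer; returns
    (bits per symbol, number of slots). Reach figures are made up."""
    table = [  # (bits per symbol, assumed max reach in km)
        (6, 20),   # 64QAM
        (4, 40),   # 16QAM
        (2, 80),   # QPSK
        (1, 160),  # BPSK
    ]
    for bits, reach in table:
        if distance_km <= reach:
            capacity = bits * slot_ghz                     # Gb/s per slot, idealized
            return bits, int(-(-demand_gbps // capacity))  # ceiling division
    raise ValueError("path too long for any supported format")

print(assign_modulation(35, 100))  # (4, 2): 16QAM, two sub-carrier slots
```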

  • Energy-Efficient Resource Management in Mobile Cloud Computing

    Xiaomin JIN  Yuanan LIU  Wenhao FAN  Fan WU  Bihua TANG  

     
    PAPER-Energy in Electronics Communications

      Publicized:
    2017/10/16
      Vol:
    E101-B No:4
      Page(s):
    1010-1020

    Mobile cloud computing (MCC) has been proposed as a new approach to enhance mobile device performance via computation offloading. The growth in cloud computing energy consumption is placing pressure on both the environment and cloud operators. In this paper, we focus on energy-efficient resource management in MCC and aim to reduce cloud operators' energy consumption through resource management. We establish a deterministic resource management model by solving a combinatorial optimization problem with constraints. To obtain the resource management strategy in deterministic scenarios, we propose a deterministic strategy algorithm based on the adaptive group genetic algorithm (AGGA). Wireless networks are used to connect to the cloud in MCC, which introduces uncertainty into resource management. Based on the deterministic model, we establish a stochastic model that involves a stochastic optimization problem with chance constraints. To solve this problem, we propose a stochastic strategy algorithm based on Monte Carlo simulation and AGGA. Experiments show that our deterministic strategy algorithm obtains approximately optimal solutions with low algorithmic complexity with respect to the problem size, and that our stochastic strategy algorithm saves more energy than other algorithms while satisfying the chance constraints.
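
    Both strategy algorithms are built around a group genetic algorithm. The skeleton below is only the generic GA cycle (truncation selection, one-point crossover, bit-flip mutation); the adaptive grouping that defines AGGA and the chance-constraint handling are not shown.

```python
import random

def genetic_search(fitness, genome_len, pop=30, gens=100, pm=0.05):
    """Generic GA: keep the fitter half, refill the population with
    one-point-crossover children, flip each child bit with probability pm."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = [g ^ (random.random() < pm) for g in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy objective: maximize the number of 1-bits in the genome.
print(sum(genetic_search(sum, genome_len=16)))  # 16 (or very close)
```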

  • A Heuristic for Constructing Smaller Automata Based on Suffix Sorting and Its Application in Network Security

    Inbok LEE  Victor C. VALGENTI  Min S. KIM  Sung-il OH  

     
    LETTER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    613-615

    In this paper we present a simple heuristic for constructing smaller automata for a set of regular expressions, based on suffix sorting: finding common prefixes and suffixes in the regular expressions and merging them. This is an important problem in network security. We applied our approach to random and real-world regular expressions. Experimental results show that our approach yields up to a 12-fold improvement in throughput.
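
    The prefix half of the heuristic amounts to building a trie over the pattern set, so shared prefixes cost one state chain instead of many. A minimal sketch (the suffix-sorting-based suffix merging, the other half of the paper, is omitted):

```python
def build_prefix_trie(patterns):
    """Merge common prefixes of a pattern set into a trie; "$" marks an
    accepting node and stores the pattern that ends there."""
    root = {}
    for pat in patterns:
        node = root
        for ch in pat:
            node = node.setdefault(ch, {})
        node["$"] = pat
    return root

def count_states(node):
    return 1 + sum(count_states(child) for key, child in node.items() if key != "$")

trie = build_prefix_trie(["abcd", "abce", "abxy"])
print(count_states(trie))  # 8 states instead of 13 for three separate chains
```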

  • Blind Source Separation and Equalization Based on Support Vector Regression for MIMO Systems

    Chao SUN  Ling YANG  Juan DU  Fenggang SUN  Li CHEN  Haipeng XI  Shenglei DU  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2017/08/28
      Vol:
    E101-B No:3
      Page(s):
    698-708

    In this paper, we first propose two batch blind source separation and equalization algorithms based on support vector regression (SVR) for linear time-invariant multiple-input multiple-output (MIMO) systems. The proposed algorithms combine the conventional cost function of SVR with the error functions of classical online blind equalization algorithms: the error functions of both the constant modulus algorithm (CMA) and the radius-directed algorithm (RDA) are contained in the penalty term of the SVR. To recover all sources simultaneously, the cross-correlations of the equalizer outputs are included in the cost functions. Simulation experiments show that the proposed algorithms can recover all sources successfully and compensate for channel distortion simultaneously. Using the iteratively re-weighted least squares (IRWLS) solution of SVR, the proposed algorithms exhibit low computational complexity. Compared with traditional algorithms, the new algorithms require fewer samples to achieve convergence and attain lower residual interference. For multilevel signals, algorithms based solely on the constant modulus property usually show a relatively high residual error, so we further propose two dual-mode blind source separation and equalization schemes. Of the two, the dual-mode scheme based on SVR requires fewer samples to achieve convergence and further reduces the residual interference.
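
    The CMA error function that the penalty term reuses drives a classic stochastic-gradient tap update. For reference, a baseline single-channel CMA equalizer is sketched below; the paper instead embeds this error into an SVR cost, adds the RDA and cross-correlation terms, and handles the MIMO case, none of which is shown here.

```python
import numpy as np

def cma_equalize(x, taps=11, mu=1e-3, radius=1.0):
    """Constant modulus algorithm: adapt FIR taps w so that |w @ u|^2
    approaches `radius`, using the gradient of E[(|y|^2 - radius)^2]."""
    w = np.zeros(taps, dtype=complex)
    w[taps // 2] = 1.0                         # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                # regressor, newest sample first
        y[n] = w @ u
        e = y[n] * (np.abs(y[n]) ** 2 - radius)
        w -= mu * e * np.conj(u)
    return y, w

# QPSK through a mild channel: the modulus error shrinks after adaptation.
rng = np.random.default_rng(0)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 2000) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.3], mode="same")
y, _ = cma_equalize(x)
print(np.abs(np.abs(y[500:]) - 1).mean() < np.abs(np.abs(x[500:]) - 1).mean())
```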

  • Polynomial-Space Exact Algorithms for the Bipartite Traveling Salesman Problem

    Mohd SHAHRIZAN OTHMAN  Aleksandar SHURBEVSKI  Hiroshi NAGAMOCHI  

     
    LETTER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    611-612

    Given an edge-weighted bipartite digraph G=(A,B;E), the Bipartite Traveling Salesman Problem (BTSP) asks to find the minimum cost of a Hamiltonian cycle of G, or to determine that none exists. When |A|=|B|=n, the BTSP can be solved using polynomial space in O*(4^{2n} n^{log n}) time by the divide-and-conquer algorithm of Gurevich and Shelah (SIAM Journal on Computing, 16(3), pp.486-502, 1987). We adapt their algorithm to the bipartite case and show an improved time bound of O*(4^{2n}), saving the n^{log n} factor.

  • Polynomial Time Learnability of Graph Pattern Languages Defined by Cographs

    Takayoshi SHOUDAI  Yuta YOSHIMURA  Yusuke SUZUKI  Tomoyuki UCHIDA  Tetsuhiro MIYAHARA  

     
    PAPER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    582-592

    A cograph (complement reducible graph) is a graph which can be generated by disjoint union and complement operations on graphs, starting with a single-vertex graph. Cographs arise in many areas of computer science and are studied extensively. With the goal of developing an effective data mining method for graph-structured data, in this paper we introduce a graph pattern expression, called a cograph pattern, which is a special type of cograph having structured variables. Firstly, we show that the problem of deciding whether or not a given cograph pattern g matches a given cograph G is NP-complete. In light of this result, we consider the polynomial-time learnability of cograph pattern languages defined by cograph patterns whose variables are labeled with mutually distinct labels, called linear cograph patterns. Secondly, we present a polynomial-time matching algorithm for linear cograph patterns. Next, we give a polynomial-time algorithm for obtaining a minimally generalized linear cograph pattern that explains given positive data. Finally, we show that the class of linear cograph pattern languages is polynomial-time inductively inferable from positive data.

  • On the Properties and Applications of Inconsistent Neighborhood in Neighborhood Rough Set Models

    Shujiao LIAO  Qingxin ZHU  Rui LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/20
      Vol:
    E101-D No:3
      Page(s):
    709-718

    Rough set theory is an important branch of data mining and granular computing, in which the neighborhood rough set model was introduced to deal with numerical and hybrid data. In this paper, we propose a new concept called the inconsistent neighborhood, which extracts the inconsistent objects from a traditional neighborhood. Firstly, a series of interesting properties is obtained for inconsistent neighborhoods; in particular, some of these properties yield new ways to compute the quantities used in neighborhood rough sets. Then, a fast forward attribute reduction algorithm is proposed by applying these properties. Experiments on twelve UCI datasets show that the proposed algorithm obtains the same attribute reduction results as existing algorithms in the neighborhood rough set domain while running much faster. This validates that employing inconsistent neighborhoods is advantageous in applications of neighborhood rough sets. The study provides a new insight into neighborhood rough set theory.
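
    A minimal sketch of the central notion, assuming objects are real-valued feature vectors with class labels and a neighborhood is a δ-ball in Euclidean distance (the helper name is hypothetical):

```python
import numpy as np

def inconsistent_neighborhood(X, y, i, delta):
    """Inconsistent neighborhood of object i: the neighbors within distance
    delta whose class label differs from object i's label."""
    dist = np.linalg.norm(X - X[i], axis=1)
    neighbors = np.where(dist <= delta)[0]
    return neighbors[y[neighbors] != y[i]]

X = np.array([[0.0], [0.1], [0.2], [0.9]])
y = np.array([0, 0, 1, 1])
print(inconsistent_neighborhood(X, y, 0, 0.25))  # [2]: near object 0, other class
```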

  • Stochastic Divergence Minimization for Biterm Topic Models

    Zhenghang CUI  Issei SATO  Masashi SUGIYAMA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2017/12/20
      Vol:
    E101-D No:3
      Page(s):
    668-677

    With the emergence and thriving development of social networks, a huge number of short texts are accumulated and need to be processed. Inferring the latent topics of collected short texts is an essential task for understanding their hidden structure and predicting new content. The biterm topic model (BTM) was recently proposed for short texts to overcome the sparseness of document-level word co-occurrences by directly modeling the generation process of word pairs. Stochastic inference algorithms based on collapsed Gibbs sampling (CGS) and collapsed variational inference have been proposed for BTM. However, they either require large computational complexity or rely on very crude estimation that does not preserve sufficient statistics. In this work, we develop a stochastic divergence minimization (SDM) inference algorithm for BTM to achieve better predictive likelihood in a scalable way. Experiments show that SDM-BTM trained on 30% of the data outperforms the best existing algorithm trained on the full data.
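
    The preprocessing step BTM starts from is easy to show: every unordered word pair co-occurring in a short text is one biterm, and the model fits topics to this biterm multiset rather than to documents. The extraction sketch below stops there; the inference itself (CGS or the proposed SDM) is the hard part and is not shown.

```python
from collections import Counter
from itertools import combinations

def extract_biterms(docs):
    """Collect the multiset of unordered word pairs (biterms) from each
    short document; BTM models the generation of these pairs directly."""
    biterms = Counter()
    for doc in docs:
        for w1, w2 in combinations(doc.split(), 2):
            biterms[tuple(sorted((w1, w2)))] += 1
    return biterms

print(extract_biterms(["cats like fish", "dogs like cats"]).most_common(2))
# [(('cats', 'like'), 2), (('cats', 'fish'), 1)]
```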

  • PTS-Based PAPR Reduction by Iterative p-Norm Minimization without Side Information in OFDM Systems

    Moeko YOSHIDA  Hiromichi NASHIMOTO  Teruyuki MIYAJIMA  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/08/24
      Vol:
    E101-B No:3
      Page(s):
    856-864

    This paper proposes a partial transmit sequences (PTS)-based PAPR reduction method and a phase factor estimation method that requires no side information, for OFDM systems with QPSK and 16QAM modulation. In the transmitter, an iterative algorithm that minimizes the p-norm of the transmitted signal determines the phase factors that reduce the PAPR. Unlike conventional methods, the phase factors are allowed to take continuous values in a limited range. In the receiver, the phase factors are blindly estimated by evaluating the phase differences between the equalizer's outputs and their closest constellation points. Simulation results show that the proposed PAPR reduction method is more computationally efficient than conventional PTS. Moreover, the combined use of the two proposed methods achieves a satisfactory tradeoff between PAPR and BER when the phase factors are limited properly.
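
    For orientation, the sketch below shows the PAPR metric and a baseline PTS that searches a discrete phase alphabet exhaustively; the proposed method instead optimizes continuous phase factors by iterative p-norm minimization and needs no side information, neither of which is implemented here.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_exhaustive(X, subblocks=4, phases=(1, -1, 1j, -1j)):
    """Baseline PTS: split the frequency-domain block into interleaved
    sub-blocks, then exhaustively pick per-sub-block phase rotations that
    minimize the PAPR of the summed time-domain signal."""
    N = len(X)
    parts = [np.zeros(N, complex) for _ in range(subblocks)]
    for k in range(N):
        parts[k % subblocks][k] = X[k]
    times = [np.fft.ifft(p) for p in parts]
    best_papr, best_x = np.inf, None
    for phi in product(phases, repeat=subblocks):
        x = sum(b * t for b, t in zip(phi, times))
        if papr_db(x) < best_papr:
            best_papr, best_x = papr_db(x), x
    return best_x, best_papr

X = np.exp(2j * np.pi * np.random.default_rng(1).random(64))  # unit-power symbols
_, after = pts_exhaustive(X)
print(round(papr_db(np.fft.ifft(X)), 2), "dB before,", round(after, 2), "dB after")
```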

  • Approximate Frequent Pattern Discovery in Compressed Space

    Shouhei FUKUNAGA  Yoshimasa TAKABATAKE  Tomohiro I  Hiroshi SAKAMOTO  

     
    PAPER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    593-601

    A grammar compression is a restricted context-free grammar (CFG) that derives a single string deterministically. The goal of a grammar compression algorithm is to develop a smaller CFG by finding and removing duplicate patterns, which is simply a frequent pattern discovery process. Any frequent pattern can be obtained in linear time; however, a huge working space is required for longer patterns, and the entire string must be preloaded into memory. We propose an online algorithm that addresses this problem approximately within compressed space. For an input sequence of symbols a_1, a_2, ..., let G_i be a grammar compression for the string a_1a_2…a_i. In this study, an online algorithm is one that can compute G_{i+1} from (G_i, a_{i+1}) without explicitly decompressing G_i. Here, let G be a grammar compression for a string S. We say that a variable X approximates a substring P of S within approximation ratio δ iff for any interval [i,j] with P=S[i,j], the parse tree of G has a node labeled with X that derives S[l,r] for a subinterval [l,r] of [i,j] satisfying |[l,r]| ≥ δ|[i,j]|. Then, G solves the frequent pattern discovery problem approximately within δ iff for any frequent pattern P of S, there exists a variable that approximates P within δ. Here, δ is called the approximation ratio of G for S. Previously, the best approximation ratio obtained by a polynomial time algorithm was Ω(1/lg²|P|). The main contribution of this work is to present a new lower bound Ω(1/(lg*|S| lg|P|)), which is smaller than the previous bound only when lg*|S| > lg|P|.
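
    The "finding and removing duplicate patterns" step can be illustrated by one round of Re-Pair-style digram replacement. This offline toy works on the plain string; the paper's point is to keep doing such work online, inside the compressed representation, which is not attempted here.

```python
from collections import Counter

def replace_top_digram(seq, fresh_var):
    """One grammar-compression round: replace the most frequent adjacent
    pair in seq with a fresh variable and return the new rule."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), freq = pairs.most_common(1)[0]
    if freq < 2:
        return seq, None             # nothing repeats; grammar is final
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(fresh_var)    # emit the variable for the digram
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, (fresh_var, (a, b))

seq, rule = replace_top_digram(list("abab_abab"), "X")
print("".join(seq), rule)  # XX_XX ('X', ('a', 'b'))
```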

  • Reduction of Constraints from Multipartition to Bipartition in Augmenting Edge-Connectivity of a Graph by One

    Satoshi TAOKA  Tadachika OKI  Toshiya MASHIMA  Toshimasa WATANABE  

     
    PAPER

      Vol:
    E101-A No:2
      Page(s):
    357-366

    The k-edge-connectivity augmentation problem with multipartition constraints (kECAMP, for short) is defined by "Given a multigraph G=(V,E) and a multipartition π={V_1,...,V_r} (r≥2) of V, that is, $V = \bigcup_{h=1}^{r} V_h$ and $V_i \cap V_j = \emptyset$ (1≤i<j≤r)
