
Keyword Search Results

[Keyword] algorithm (2137 hits)

Showing 521-540 of 2137 hits

  • Transform Domain Shadow Removal for Foreground Silhouette

    Toshiaki SHIOTA  Kazuki NAKAGAMI  Takao NISHITANI  

     
    PAPER-Digital Signal Processing

    Vol: E96-A  No: 3  Page(s): 667-674

    A novel shadow removal approach is proposed that uses block-wise transform-domain shadow detection. The approach is based on the fact that the spatial frequency distribution of a normal background area and that of the same area under a shadow cast by a foreground object are the same. The approach is especially useful for silhouette extraction with Gaussian Mixture Model (GMM) based foreground segmentation performed in the transform domain, because the frequency distribution has already been calculated during the foreground segmentation. Stable shadow removal is realized thanks to the transform-domain implementation.
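
    A rough illustration of the block-wise idea (not the authors' implementation) is sketched below: the normalized DCT magnitude distribution of a candidate foreground block is compared with that of the corresponding background block. A cast shadow darkens the block but leaves its spatial-frequency distribution almost unchanged, so high similarity suggests shadow rather than object. The block size, the cosine-similarity test and the threshold are assumptions.

        import numpy as np
        from scipy.fft import dctn  # 2-D DCT of an image block

        def freq_distribution(block):
            """Normalized spatial-frequency magnitude distribution of one block."""
            mag = np.abs(dctn(block.astype(float), norm="ortho"))
            mag[0, 0] = 0.0                      # drop the DC term (overall brightness)
            total = mag.sum()
            return mag / total if total > 0 else mag

        def looks_like_shadow(fg_block, bg_block, similarity_thresh=0.9):
            """Heuristic: a cast shadow scales brightness but keeps the
            frequency distribution of the background nearly unchanged."""
            f = freq_distribution(fg_block).ravel()
            b = freq_distribution(bg_block).ravel()
            denom = np.linalg.norm(f) * np.linalg.norm(b)
            cosine = float(f @ b) / denom if denom > 0 else 0.0
            return cosine >= similarity_thresh   # True -> treat block as shadow, not object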

  • Two Heuristic Algorithms for the Minimum Initial Marking Problem of Timed Petri Nets

    Satoru OCHIIWA  Satoshi TAOKA  Masahiro YAMAUCHI  Toshimasa WATANABE  

     
    PAPER-Algorithms and Data Structures

    Vol: E96-A  No: 2  Page(s): 540-553

    A timed Petri net, an extension of an ordinary Petri net that introduces discrete time delays into firing activity, is practically useful in the performance evaluation of real-time systems and similar applications. Unfortunately, even the most basic problems in timed Petri net theory are often too difficult to solve efficiently. This motivates research on the complexity of Petri net problems and on the design of efficient and/or heuristic algorithms. The minimum initial marking problem of timed Petri nets (TPMIM) is defined as follows: “Given a timed Petri net, a firing count vector X and a nonnegative integer π, find a minimum initial marking (an initial marking with the minimum total number of tokens) among those initial markings M for which there is a firing scheduling that is legal on M with respect to X and whose completion time is no more than π, and, if one exists, find such a firing scheduling.” In a production system such as factory automation, the economical distribution of initial resources from which a job-processing schedule is executable can be formulated as TPMIM. The subject of this paper is to propose two pseudo-polynomial time algorithms, TPM and TMDLO, for TPMIM, and to evaluate them by computer experiment. Each of the two algorithms finds an initial marking and a firing sequence by means of algorithms for MIM (the minimum initial marking problem for non-timed Petri nets) and then converts them into a firing scheduling of the given timed Petri net. Our computer experiments show that TPM has the highest capability among the implemented algorithms, including TMDLO.
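
    The TPM and TMDLO heuristics themselves are not reproduced here; the following minimal sketch only checks the feasibility condition that underlies the problem statement, namely whether a proposed firing sequence is legal from a given initial marking of an (untimed) Petri net. The data-structure conventions are assumptions.

        # Minimal Petri-net feasibility check (not the TPM/TMDLO heuristics):
        # verifies that a firing sequence is legal from a given initial marking.
        def is_legal_firing_sequence(initial_marking, transitions, sequence):
            """
            initial_marking: dict place -> token count
            transitions: dict name -> (consumes, produces), each a dict place -> count
            sequence: list of transition names to fire in order
            """
            marking = dict(initial_marking)
            for name in sequence:
                consumes, produces = transitions[name]
                # a transition is enabled only if every input place holds enough tokens
                if any(marking.get(p, 0) < w for p, w in consumes.items()):
                    return False
                for p, w in consumes.items():
                    marking[p] -= w
                for p, w in produces.items():
                    marking[p] = marking.get(p, 0) + w
            return True

        # Example: two places, one transition moving a token from p1 to p2.
        net = {"t1": ({"p1": 1}, {"p2": 1})}
        print(is_legal_firing_sequence({"p1": 1}, net, ["t1"]))   # True
        print(is_legal_firing_sequence({"p1": 0}, net, ["t1"]))   # False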

  • A Theoretical Framework for Constructing Matching Algorithms Secure against Wolf Attack

    Manabu INUMA  Akira OTSUKA  Hideki IMAI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E96-D  No: 2  Page(s): 357-364

    The security of biometric authentication systems against impersonation attacks is usually evaluated by the false accept rate (FAR). The FAR is a metric for the zero-effort impersonation attack, in which the attacker attempts to impersonate a user by presenting his own biometric sample to the system. However, when the attacker has some information about the algorithms used in the biometric authentication system, he might be able to find a “strange” sample (called a wolf) that shows high similarity to many templates and attempt to impersonate a user by presenting the wolf. Une, Otsuka, and Imai [22],[23] formulated such a stronger impersonation attack (called a wolf attack), defined a new security metric (the wolf attack probability, WAP), and showed that the WAP is far higher than the FAR in the fingerprint-minutiae matching algorithm proposed by Ratha et al. [19] and in the finger-vein-pattern matching algorithm proposed by Miura et al. [15]. Previously, we constructed secure matching algorithms based on a feature-dependent threshold approach [8] and showed that, if the score distribution is perfectly estimated for each input feature, the proposed algorithms can lower the WAP to a value almost as small as the FAR. In this paper, in addition to reviewing the results of our previous work [8], we show that the proposed matching algorithm can keep the false reject rate (FRR) low enough without degrading security, provided that the score distribution is normal for each feature.
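
    The sketch below illustrates the feature-dependent-threshold idea in the simplest possible form and is not the algorithm of [8]: given an estimated impostor-score distribution for one particular input feature, the acceptance threshold is placed so that the tail probability above it does not exceed a target rate, which is what bounds the acceptance probability of a wolf-like input. The interface and the synthetic scores are assumptions.

        import numpy as np

        def feature_dependent_threshold(impostor_scores, target_rate=1e-3):
            """
            impostor_scores: sampled/estimated matching scores that non-mate
                             templates would achieve against this particular
                             input feature (the per-feature score distribution).
            Returns the smallest threshold whose exceedance probability is at
            most target_rate, so even a wolf-like input cannot be accepted
            against many templates with high probability.
            """
            scores = np.asarray(impostor_scores, dtype=float)
            # empirical (1 - target_rate) quantile of the per-feature score distribution
            return float(np.quantile(scores, 1.0 - target_rate))

        # Usage sketch (synthetic scores; a wolf-like input gets a stricter threshold):
        rng = np.random.default_rng(0)
        ordinary = rng.normal(0.2, 0.05, 10000)   # low similarity to most templates
        wolfish  = rng.normal(0.6, 0.05, 10000)   # suspiciously high similarity overall
        print(feature_dependent_threshold(ordinary), feature_dependent_threshold(wolfish))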

  • A Relocation Planning Method for Railway Cars in Final Assembly Shop

    Yoichi NAGAO  Shinichi NAKANO  Akifumi HOSHINO  Yasushi KANETA  Toshiyuki KITA  Masakazu OKAMOTO  

     
    PAPER-Graphs and Networks

    Vol: E96-A  No: 2  Page(s): 554-561

    The authors propose a method for planning the movements required to relocate railway cars in preparation for final assembly. A solution is obtained in three steps. The first step extracts the order constraints between the movements of the railway cars based on their locations before and after relocation. The second step introduces movements that put a railway car into another location temporarily, in order to avoid deadlocks among the movements. The final step obtains the movement order that carries out the relocation in the shortest time, in accordance with the calculated order constraints, by using a genetic algorithm (GA). Because the order constraints are resolved in advance, the movement order can easily be decided by the GA. As a result, the developed system plans the relocation in less time than an expert.

  • A Greedy Genetic Algorithm for the TDMA Broadcast Scheduling Problem

    Chih-Chiang LIN  Pi-Chung WANG  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E96-D  No: 1  Page(s): 102-110

    The broadcast scheduling problem (BSP) in wireless ad-hoc networks is a well-known NP-complete combinatorial optimization problem. The BSP aims at finding a transmission schedule whose time slots are collision-free in a wireless ad-hoc network with time-division multiple access (TDMA). The transmission schedule is optimized to minimize the frame length of the node transmissions and to maximize the utilization of the shared channel. Recently, many metaheuristics have been shown to solve smaller problem instances of the BSP optimally. However, for complex problem instances, metaheuristics can be quite time- and memory-consuming. In this work, we propose a greedy genetic algorithm for solving the BSP with a large number of nodes. We present three heuristic genetic operators, namely a greedy crossover and two greedy mutation operators, that optimize both objectives of the BSP and generate good solutions. Our experiments use both benchmark data sets and randomly generated problem instances. The experimental results show that our genetic algorithm is effective in solving BSP instances of large-scale networks with 2,500 nodes.
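
    The genetic operators of the paper are not shown here; the sketch below only illustrates the greedy, collision-free slot assignment that such operators build on, under the usual BSP constraint that a node may not share a slot with any one- or two-hop neighbour. The degree-first ordering is an assumed heuristic.

        def greedy_tdma_schedule(adj):
            """
            adj: dict node -> set of one-hop neighbours (undirected ad-hoc network).
            Returns dict node -> slot index such that no node shares a slot with any
            one- or two-hop neighbour (the TDMA collision constraint of the BSP).
            Greedy 'smallest feasible slot' assignment; frame length = max slot + 1.
            """
            def two_hop(u):
                nbrs = set(adj[u])
                for v in adj[u]:
                    nbrs |= adj[v]
                nbrs.discard(u)
                return nbrs

            # schedule the most constrained nodes first (a common greedy ordering)
            order = sorted(adj, key=lambda u: len(two_hop(u)), reverse=True)
            slot = {}
            for u in order:
                busy = {slot[v] for v in two_hop(u) if v in slot}
                s = 0
                while s in busy:
                    s += 1
                slot[u] = s
            return slot

        # Usage: a 4-node chain 0-1-2-3; nodes 0 and 3 may safely reuse a slot.
        print(greedy_tdma_schedule({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))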

  • Dependency Chart Parsing Algorithm Based on Ternary-Span Combination

    Meixun JIN  Yong-Hun LEE  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing

    Vol: E96-D  No: 1  Page(s): 93-101

    This paper presents a new span-based dependency chart parsing algorithm that models the relations between the left and right dependents of a head. Such relations cannot be modeled by existing span-based algorithms, even though they appear frequently in dependency corpora. We address this problem through ternary-span combination during subtree derivation. By modeling the relations between the left and right dependents of a head, the proposed algorithm provides a better capability for coordination disambiguation when the conjunction is annotated as the head of the left and right conjuncts. This leads to state-of-the-art dependency parsing performance on the Chinese data of the CoNLL shared task.

  • A Max-Min Approach to Channel Shortening in OFDM Systems

    Tsukasa TAKAHASHI  Teruyuki MIYAJIMA  

     
    LETTER

    Vol: E96-A  No: 1  Page(s): 293-295

    In OFDM systems, residual inter-block interference can be suppressed by a time-domain equalizer that blindly shortens the effective length of the channel impulse response. To further improve the performance of blind equalizers, we propose a channel shortening method that maximizes the minimum FFT output power over the data subcarriers. Simulation results indicate that the max-min strategy improves performance over a conventional channel shortening method.
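
    The following sketch only expresses the max-min objective, not the equalizer design itself: for a candidate time-domain equalizer w, the effective channel is the convolution of w with the channel impulse response, and the score is the minimum FFT output power over the data subcarriers; the max-min design selects the w with the largest score. The brute-force candidate search and parameter values are purely illustrative.

        import itertools
        import numpy as np

        def min_subcarrier_power(w, channel, n_fft, data_subcarriers):
            """Minimum FFT output power over data subcarriers for the effective
            channel (channel convolved with the time-domain equalizer w)."""
            effective = np.convolve(channel, w)
            freq_resp = np.fft.fft(effective, n_fft)
            return float(np.min(np.abs(freq_resp[data_subcarriers]) ** 2))

        def max_min_select(candidates, channel, n_fft, data_subcarriers):
            """Pick the candidate equalizer maximizing the minimum subcarrier power."""
            return max(candidates,
                       key=lambda w: min_subcarrier_power(w, channel, n_fft, data_subcarriers))

        # Usage sketch: 3-tap shorteners, 64-point FFT, subcarriers 1..52 carry data.
        channel = np.array([1.0, 0.6, 0.3, 0.2, 0.1])
        candidates = [np.array(c) for c in itertools.product([-1.0, -0.5, 0.5, 1.0], repeat=3)]
        best_w = max_min_select(candidates, channel, n_fft=64, data_subcarriers=list(range(1, 53)))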

  • Adaptive Limited Dynamic Bandwidth Allocation Scheme to Improve Bandwidth Sharing Efficiency in Hybrid PON Combining FTTH and Wireless Sensor Networks

    Monir HOSSEN  Masanori HANAWA  

     
    PAPER-Network

    Vol: E96-B  No: 1  Page(s): 127-134

    This paper proposes a dynamic bandwidth allocation algorithm, called adaptive limited dynamic bandwidth allocation (ALDBA), that improves the network performance and bandwidth sharing efficiency in the upstream channels of a hybrid passive optical network (PON) combining a fiber-to-the-home (FTTH) access network and wireless sensor networks (WSNs). Unlike existing algorithms, ALDBA is not limited to controlling FTTH access networks; it also supports WSNs. For the proposed algorithm, we take into account the difference in the lengths of the data packets generated by FTTH terminals and WSN sensor nodes in order to evaluate the end-to-end average packet delay, bandwidth utilization, time jitter, and upstream efficiency. Two variants of the proposed algorithm and the limited service (LS) scheme, a well-known existing algorithm, are compared under non-uniform traffic conditions without considering priority scheduling. We evaluate the proposed scheme through simulation with a realistic traffic model, namely self-similar network traffic, using several performance parameters to validate its effectiveness. The simulation results show that both ALDBA variants outperform the existing LS scheme in terms of average packet delay, bandwidth utilization, jitter, and upstream efficiency at both low and high traffic loads.
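
    As a point of reference only, the sketch below shows the limited-service grant rule that schemes such as ALDBA extend: each unit (FTTH ONU or WSN gateway) is granted its requested bytes capped by a maximum grant. Making that cap per unit or per traffic type is the "adaptive limited" twist assumed here, not the exact ALDBA rule; all names and values are illustrative.

        def adaptive_limited_grants(requests, caps, default_cap):
            """
            requests: dict unit -> requested upstream bytes this polling cycle.
            caps:     dict unit -> per-unit maximum grant (e.g. small for WSN
                      gateways with short packets, larger for FTTH ONUs).
            The limited-service rule grants min(request, cap); choosing the cap
            per unit / per traffic type is the adaptive element assumed here.
            """
            return {u: min(r, caps.get(u, default_cap)) for u, r in requests.items()}

        # Usage: FTTH ONUs with bursty demand, a sensor gateway with short packets.
        print(adaptive_limited_grants({"onu1": 40000, "onu2": 1200, "wsn1": 300},
                                      {"onu1": 15500, "wsn1": 1000}, 15500))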

  • Modeling and Algorithms for QoS-Aware Service Composition in Virtualization-Based Cloud Computing

    Jun HUANG  Yanbing LIU  Ruozhou YU  Qiang DUAN  Yoshiaki TANAKA  

     
    PAPER

    Vol: E96-B  No: 1  Page(s): 10-19

    Cloud computing is an emerging computing paradigm that may have a significant impact on various aspects of the development of information infrastructure. In a Cloud environment, different types of network resources need to be virtualized as a series of service components through network virtualization, and these service components must be further composed into Cloud services provided to end users. Quality of Service (QoS) aware service composition therefore plays a crucial role in Cloud service provisioning. This paper addresses the problem of composing a sequence of service components for QoS-guaranteed service provisioning in a virtualization-based Cloud computing environment. The contributions of this paper are a system model for Cloud service provisioning and two approximation algorithms for QoS-aware service composition. Specifically, a system model is first developed to characterize service provisioning behavior in virtualization-based Cloud computing; then a novel approximation algorithm and a variant of a well-known QoS routing procedure are presented to solve QoS-aware service composition. Theoretical analysis shows that the two algorithms have the same level of time complexity. A comparison study based on simulation experiments indicates that the proposed algorithm achieves better time efficiency and scalability without compromising solution quality. The modeling technique and algorithms developed in this paper are general and effective, and are thus applicable to practical Cloud computing systems.

  • An EM Algorithm-Based Disintegrated Channel Estimator for OFDM AF Cooperative Relaying

    Jeng-Shin SHEU  Wern-Ho SHEEN  

     
    PAPER-Wireless Communication Technologies

    Vol: E96-B  No: 1  Page(s): 254-262

    The cooperative orthogonal frequency-division multiplexing (OFDM) relaying system is widely regarded as a key design for future broadband mobile cellular systems. This paper focuses on channel estimation in such a system when amplify-and-forward (AF) is used as the relaying strategy. In cooperative AF relaying, the destination requires the individual (disintegrated) channel state information (CSI) of the source-relay (S-R) and relay-destination (R-D) links to optimally combine the signals received from the source and the relay. Traditionally, the disintegrated CSIs are obtained with two channel estimators: one at the relay and the other at the destination. That is, the CSI of the S-R link is estimated at the relay and passed to the destination, and the CSI of the R-D link is estimated at the destination with the help of pilot symbols transmitted by the relay. In this paper, a new disintegrated channel estimator is proposed; based on an expectation-maximization (EM) algorithm, the disintegrated CSIs can be estimated solely by the estimator at the destination. The new method therefore requires neither signaling overhead for passing the CSI of the S-R link to the destination nor pilot symbols for estimating the R-D link. Computer simulations show that the proposed estimator works well at the signal-to-noise ratios of interest.

  • Yield-Driven Clock Skew Scheduling for Arbitrary Distributions of Critical Path Delays

    Yanling ZHI  Wai-Shing LUK  Yi WANG  Changhao YAN  Xuan ZENG  

     
    PAPER-Physical Level Design

    Vol: E95-A  No: 12  Page(s): 2172-2181

    Yield-driven clock skew scheduling was previously formulated as a minimum cost-to-time ratio cycle problem under the assumption that variational path delays follow Gaussian distributions. In today's nanometer technologies, however, process variations increasingly undermine this assumption, as variational delays with non-Gaussian distributions have been observed on these paths. In this paper, we propose a novel yield-driven clock skew scheduling method for arbitrary distributions of critical path delays. First, a general problem formulation is proposed. By integrating the cumulative distribution function (CDF) of the critical path delays, the formulation can handle path delays with any distribution. It also generalizes previous formulations of yield-driven clock skew scheduling and reveals their statistical interpretations. A generalized Howard algorithm is derived for finding the critical cycles of the underlying timing constraint graphs. Moreover, an effective algorithm based on minimum balancing is proposed for overall yield improvement. Experimental results on ISCAS89 benchmarks show that, compared with two representative existing methods, our method improves the yield by 10.25% on average (and by up to 14.66%).

  • Blocked United Algorithm for the All-Pairs Shortest Paths Problem on Hybrid CPU-GPU Systems

    Kazuya MATSUMOTO  Naohito NAKASATO  Stanislav G. SEDUKHIN  

     
    PAPER-Parallel and Distributed Computing

    Vol: E95-D  No: 12  Page(s): 2759-2768

    This paper presents a blocked united algorithm for the all-pairs shortest paths (APSP) problem. The algorithm simultaneously computes both the shortest-path distance matrix and the shortest-path construction matrix of a graph, and is designed for high-speed APSP solutions on hybrid CPU-GPU systems. In our implementation, the two most compute-intensive parts of the algorithm are performed on the GPU: solving the APSP sub-problem for a block of sub-matrices, and the matrix-matrix “multiplication” for the APSP problem. Moreover, the amount of data communication between CPU (host) memory and GPU memory is reduced by reusing blocks once they have been sent to the GPU. When the problem size (the number of vertices in the graph) is large enough compared to the block size, our implementation of the blocked algorithm requires the exchange of only three blocks between CPU and GPU during each block computation on the GPU. We measured the performance of the implementation on two different CPU-GPU systems. A system containing an Intel Sandy Bridge CPU (Core i7 2600K) and an AMD Cayman GPU (Radeon HD 6970) achieves up to 1.1 TFlop/s in single precision.
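
    A plain CPU reference of the blocked scheme (without the GPU offloading and without the path-construction matrix) may help fix the structure of the three per-block phases; the tiling and the min-plus tile "multiplication" below follow the standard blocked Floyd-Warshall formulation rather than the authors' exact kernels.

        import numpy as np

        def blocked_apsp(dist, B):
            """
            In-place blocked Floyd-Warshall on an n x n distance matrix dist
            (np.inf where there is no edge, 0 on the diagonal). CPU reference of
            the tiling scheme only; the paper offloads the per-tile min-plus
            'multiplications' to the GPU and also maintains a path-construction
            matrix, which is omitted here for brevity.
            """
            n = dist.shape[0]
            assert n % B == 0, "for simplicity, n must be a multiple of the block size"
            nb = n // B

            def tile(i, j):
                return dist[i * B:(i + 1) * B, j * B:(j + 1) * B]

            def fw_inplace(a):
                # ordinary Floyd-Warshall restricted to one diagonal tile
                for m in range(a.shape[0]):
                    np.minimum(a, a[:, m:m + 1] + a[m:m + 1, :], out=a)

            def minplus(c, a, b):
                # c = min(c, a (min,+) b): the 'multiplication' of the shortest-path semiring
                np.minimum(c, (a[:, :, None] + b[None, :, :]).min(axis=1), out=c)

            for k in range(nb):
                fw_inplace(tile(k, k))                         # phase 1: diagonal tile
                for j in range(nb):                            # phase 2: pivot row/column
                    if j != k:
                        minplus(tile(k, j), tile(k, k), tile(k, j))
                        minplus(tile(j, k), tile(j, k), tile(k, k))
                for i in range(nb):                            # phase 3: remaining tiles
                    for j in range(nb):
                        if i != k and j != k:
                            minplus(tile(i, j), tile(i, k), tile(k, j))
            return dist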

  • Asymptotically Optimal Merging on ManyCore GPUs

    Arne KUTZNER  Pok-Son KIM  Won-Kwang PARK  

     
    PAPER-Parallel and Distributed Computing

    Vol: E95-D  No: 12  Page(s): 2769-2777

    We propose a family of algorithms for efficient merging on contemporary GPUs, each of which requires O(m log(n/m + 1)) element comparisons, where m and n are the sizes of the input sequences with m ≤ n. According to the lower bound for merging, all proposed algorithms are asymptotically optimal with respect to the number of necessary comparisons. First we introduce a parallel algorithm that splits a merging problem of size 2^l into 2^i subproblems of size 2^(l-i), for an arbitrary i with 0 ≤ i ≤ l. For i = l this algorithm is itself a complete merger, but it is rather inefficient in that case. The efficiency is boosted by moving to a two-stage approach in which the splitting process stops at a predetermined level and transfers control to several block-mergers operating in parallel. We formally prove the asymptotic optimality of the splitting process and show that, for symmetrically sized inputs, our approach delivers runtimes up to 4 times faster than the thrust::merge function that is part of the Thrust library. To assess the value of our merging technique in the context of sorting, we construct and evaluate a MergeSort on top of it. In our benchmarks the resulting MergeSort clearly outperforms the MergeSort implementation provided by the Thrust library as well as Cederman's GPU-optimized variant of QuickSort.
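
    Independently of the paper's GPU kernels, the splitting primitive that makes such a two-stage merge possible is the co-rank (merge-path) binary search: for a chosen output position it determines how many elements come from each sorted input, so every block can merge its own slice independently. The sketch below is a sequential illustration with assumed function names.

        def corank(k, a, b):
            """How many of the first k merged elements come from a (the rest from b).
            Binary search over the split point."""
            lo, hi = max(0, k - len(b)), min(k, len(a))
            while lo < hi:
                i = (lo + hi) // 2          # take i elements from a, k - i from b
                if i < len(a) and k - i > 0 and a[i] < b[k - i - 1]:
                    lo = i + 1              # a[i] still belongs to the first k outputs
                else:
                    hi = i
            return lo

        def parallel_style_merge(a, b, num_blocks):
            """Partition the merge of sorted a and b into num_blocks independent
            sub-merges (done sequentially here; on a GPU each block runs in parallel)."""
            n = len(a) + len(b)
            out = []
            prev_i = prev_j = 0
            for blk in range(1, num_blocks + 1):
                k = n * blk // num_blocks
                i = corank(k, a, b)
                j = k - i
                # each block merges its two short slices (sorted() stands in for a local merge)
                out.extend(sorted(a[prev_i:i] + b[prev_j:j]))
                prev_i, prev_j = i, j
            return out

        print(parallel_style_merge([1, 3, 5, 7, 9], [2, 4, 6, 8], 3))   # [1, 2, ..., 9]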

  • A Design of Genetically Optimized Linguistic Models

    Keun-Chang KWAK  

     
    LETTER-Biocybernetics, Neurocomputing

    Vol: E95-D  No: 12  Page(s): 3117-3120

    In this paper, we propose a method for designing genetically optimized Linguistic Models (LMs) with the aid of fuzzy granulation. We follow the fundamental idea of the LM introduced by Pedrycz and enhance its design framework with a Genetic Algorithm (GA). An LM is designed using information granulation realized via Context-based Fuzzy C-Means (CFCM) clustering, a technique that builds information granules represented as fuzzy sets. However, it is difficult to optimize the number of linguistic contexts, the number of clusters generated by each context, and the weighting exponent. We therefore perform simultaneous GA-based optimization of the design parameters linking the information granules in the input and output spaces. Experiments on the coagulant dosing process in a water purification plant reveal that the proposed method performs better than previous works and than the LM itself.

  • Automatic Parameter Adjustment Method for Audio Equalizer Employing Interactive Genetic Algorithm

    Yuki MISHIMA  Yoshinobu KAJIKAWA  

     
    LETTER-Engineering Acoustics

    Vol: E95-A  No: 11  Page(s): 2036-2040

    In this paper, we propose an automatic parameter adjustment method for audio equalizers using an interactive genetic algorithm (IGA). It is very difficult for ordinary users who are not familiar with audio devices to adjust the parameters of audio equalizers appropriately. We therefore propose a system that automatically adjusts the parameters of an audio equalizer on the basis of the user's evaluation of the reproduced sound. The proposed system uses an IGA to adjust the gains and Q values of the peaking filters included in the audio equalizer. Listening test results demonstrate that the proposed system can adjust the parameters appropriately on the basis of the user's evaluation.
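
    The sketch below shows only the overall interactive-GA loop, with the listener's rating standing in for the fitness function; the equalizer itself is not modelled, and the parameter ranges, population size and operators are assumptions rather than the settings used in the paper.

        import random

        # Interactive-GA sketch: an individual is a list of (gain_dB, Q) pairs for the
        # equalizer's peaking filters; fitness comes from the listener, not a formula.
        N_BANDS, POP_SIZE, GENERATIONS = 5, 8, 10
        GAIN_RANGE, Q_RANGE = (-12.0, 12.0), (0.5, 8.0)

        def random_individual():
            return [(random.uniform(*GAIN_RANGE), random.uniform(*Q_RANGE))
                    for _ in range(N_BANDS)]

        def user_rating(individual):
            # In the real system the equalized sound is played back and the user
            # scores it (e.g. 1-5); a random stub keeps this sketch self-contained.
            return random.uniform(1.0, 5.0)

        def crossover(a, b):
            return [random.choice(bands) for bands in zip(a, b)]

        def mutate(ind, rate=0.2):
            return [(random.uniform(*GAIN_RANGE), random.uniform(*Q_RANGE))
                    if random.random() < rate else band for band in ind]

        population = [random_individual() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            ranked = sorted(population, key=user_rating, reverse=True)
            parents = ranked[:POP_SIZE // 2]          # keep the settings the user liked
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        best_eq_setting = population[0]   # top-ranked gains/Qs after the final round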

  • Low-Complexity GSVD-Based Beamforming and Power Allocation for a Cognitive Radio Network

    Jaehyun PARK  Yunju PARK  Sunghyun HWANG  Byung Jang JEONG  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Vol: E95-B  No: 11  Page(s): 3536-3544

    In this paper, low-complexity beamforming schemes based on the generalized singular value decomposition (GSVD) are proposed for a cognitive radio (CR) network in which multiple secondary users (SUs) with multiple antennas coexist with multiple primary users (PUs). In general, optimal beamforming, which suppresses the interference caused at the PUs to below a certain threshold while simultaneously maximizing the signal-to-interference-plus-noise ratios (SINRs) of multiple SUs, requires a complicated iterative optimization process. To overcome this computational complexity, we introduce a signal-to-leakage-plus-noise ratio (SLNR) maximizing beamforming scheme in which the weights are obtained with the GSVD algorithm and require neither iterations nor matrix squaring operations. To satisfy the leakage constraints at the PUs, two linear methods, zero-forcing (ZF) preprocessing and power allocation, are also proposed.
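
    A small numerical sketch of the SLNR criterion (not the paper's GSVD formulation) is given below: for a single transmit beamformer, SLNR maximization reduces to the dominant generalized eigenvector of the desired-channel matrix against the leakage-plus-noise matrix; SciPy's Hermitian generalized eigensolver stands in for the GSVD here, and the channel dimensions and names are assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def slnr_beamformer(h_desired, H_leak_list, noise_power):
            """
            h_desired   : (Nt,) channel from the SU transmitter to its own receiver.
            H_leak_list : list of channel matrices toward the users (PUs / other SUs)
                          that the transmission leaks into, each of shape (Nr_i, Nt).
            Returns the unit-norm weight maximizing
                SLNR(w) = |h^H w|^2 / (sum_i ||H_i w||^2 + sigma^2 ||w||^2),
            i.e. the dominant generalized eigenvector of (h h^H, leakage + sigma^2 I).
            """
            Nt = h_desired.shape[0]
            A = np.outer(h_desired, h_desired.conj())        # desired-signal term
            B = noise_power * np.eye(Nt, dtype=complex)      # noise term
            for H in H_leak_list:
                B += H.conj().T @ H                          # leakage toward other users
            eigvals, eigvecs = eigh(A, B)                    # generalized Hermitian EVD
            w = eigvecs[:, -1]                               # eigenvector of the largest eigenvalue
            return w / np.linalg.norm(w)

        # Usage with random 4-antenna channels (illustrative only):
        rng = np.random.default_rng(1)
        h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
        leaks = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)) for _ in range(2)]
        w = slnr_beamformer(h, leaks, noise_power=0.1)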

  • Normalization Method of Gradient Vector in Frequency Domain Steepest Descent Type Adaptive Algorithm

    Yusuke KUWAHARA  Yusuke IWAMATSU  Kensaku FUJII  Mitsuji MUNEYASU  Masakazu MORIMOTO  

     
    LETTER-Digital Signal Processing

    Vol: E95-A  No: 11  Page(s): 2041-2045

    In this paper, we propose a normalization method that divides the gradient vector by the sum of the diagonal element and the two adjoining elements of the matrix expressing the correlation between the components of the discrete Fourier transform (DFT) of the reference signal used to identify the unknown system. The proposed method thereby improves the estimation speed of the adaptive filter coefficients.
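
    One possible reading of such a normalization is sketched below within a block frequency-domain LMS update: plain NLMS divides each gradient bin by the diagonal correlation estimate (the bin power), and the modified normalizer additionally accumulates the magnitudes of the two adjacent correlation elements. The framing and smoothing details are simplifications, not the letter's exact algorithm.

        import numpy as np

        def fd_adaptive_step(W, x_block, d_block, power_est, mu=0.5, beta=0.9, eps=1e-8):
            """
            One block update of a frequency-domain steepest-descent (FLMS-style)
            adaptive filter identifying an unknown system.
            W         : current frequency-domain filter coefficients (length N).
            x_block   : reference-signal block (length N).
            d_block   : desired-signal block (unknown-system output + noise, length N).
            power_est : running per-bin normalization term (length N), updated here.
            NOTE: plain NLMS normalizes by the per-bin power (the diagonal of the DFT
            correlation matrix); adding the two adjacent correlation terms below is
            only a schematic reading of the letter's normalizer.
            """
            X = np.fft.fft(x_block)
            E = np.fft.fft(d_block) - W * X               # frequency-domain error
            grad = np.conj(X) * E                         # stochastic gradient per bin

            diag = np.abs(X) ** 2                         # diagonal correlation estimate
            adjacent = (np.abs(X * np.conj(np.roll(X, 1)))
                        + np.abs(X * np.conj(np.roll(X, -1))))
            power_est = beta * power_est + (1 - beta) * (diag + adjacent)

            W = W + mu * grad / (power_est + eps)         # normalized coefficient update
            return W, power_est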

  • Low PAPR Precoding Design with Dynamic Channel Assignment for SCBT Communication Systems

    Juinn-Horng DENG  Sheng-Yang HUANG  

     
    LETTER-Transmission Systems and Transmission Equipment for Communications

    Vol: E95-B  No: 11  Page(s): 3580-3584

    The single carrier block transmission (SCBT) system has become one of the most popular modulation systems because of its low peak-to-average power ratio (PAPR). This work proposes a precoding design at the transmitter that retains the low PAPR, improves performance, and reduces the computational complexity at the receiver. The system is designed according to the following procedure. First, upper-triangular dirty paper coding (UDPC) is used to pre-cancel the interference among multiple streams and allow a one-tap time-domain equalizer in the SCBT system. Next, to solve the problem of the high PAPR of the UDPC precoding system, Tomlinson-Harashima precoding (THP) is developed. Finally, since the UDPC-THP system is degraded by deep fading channels, dynamic channel on/off assignment using a maximum capacity algorithm (MCA) and a minimum BER algorithm (MBA) is proposed to enhance the bit error rate (BER) performance. Simulation results reveal that the proposed precoding transceiver provides excellent BER performance and low PAPR for the SCBT system over a multipath fading channel.

  • An Adaptive Comb Filter with Flexible Notch Gain

    Yosuke SUGIURA  Arata KAWAMURA  Youji IIGUNI  

     
    LETTER-Digital Signal Processing

    Vol: E95-A  No: 11  Page(s): 2046-2048

    This paper proposes an adaptive comb filter with a flexible notch gain, which can appropriately remove periodic noise from an observed signal. The proposed adaptive comb filter uses a simple LMS algorithm to update the notch gain coefficient so as to remove the noise while preserving the desired signal. Simulation results show the effectiveness of the proposed comb filter.
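
    A minimal sketch of this kind of structure, assuming the noise period D is known, is a feedforward comb whose delayed-tap gain is adapted by LMS to minimize the output power: the periodic noise is strongly correlated with its delayed copy and is cancelled, while a broadband desired signal largely passes through. This is a simplification, not the letter's exact filter.

        import numpy as np

        def adaptive_comb(x, period, mu=1e-3):
            """
            y[n] = x[n] - g[n] * x[n - period], with the notch gain g adapted by LMS
            to minimize the output power. Periodic noise of the given period is
            strongly correlated with its delayed copy, so g grows to cancel it,
            while a broadband desired signal is mostly passed through.
            """
            y = np.zeros(len(x))
            g = 0.0
            for n in range(len(x)):
                delayed = x[n - period] if n >= period else 0.0
                y[n] = x[n] - g * delayed
                g += mu * y[n] * delayed        # LMS: descend the gradient of y[n]^2 w.r.t. g
            return y

        # Usage: sinusoidal noise of period 50 samples plus a white desired signal.
        rng = np.random.default_rng(0)
        n = np.arange(20000)
        x = np.sin(2 * np.pi * n / 50) + 0.3 * rng.standard_normal(n.size)
        cleaned = adaptive_comb(x, period=50)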

  • Route Computation Method for Secure Delivery of Secret Shared Content

    Nagao OGINO  Takuya OMI  Hajime NAKAMURA  

     
    PAPER-Network

    Vol: E95-B  No: 11  Page(s): 3456-3463

    Secret sharing schemes have been proposed to protect content by dividing it into many pieces securely and distributing them over different locations. Secret sharing schemes can also be used for the secure delivery of content: the original content cannot be reconstructed by an attacker who cannot eavesdrop on all the pieces delivered from multiple content servers. This paper aims to obtain secure delivery routes for the pieces that minimize the probability that all the pieces can be stolen on the links composing the delivery routes. Although such a route optimization problem can be formulated as an ILP (Integer Linear Programming) model, optimum route computation based on the ILP model requires large amounts of computational resources. This paper therefore proposes a lightweight route computation method for obtaining suboptimum delivery routes that achieve a sufficiently small probability of all the pieces being stolen. The proposed method computes the delivery routes successively by applying a conventional shortest route algorithm repeatedly. The distance of the links accommodating the routes already calculated is adjusted iteratively and used in the calculation of each new shortest route. The results of a performance evaluation clarify that the proposed method, which adjusts the link distance strictly based on the risk level at each link, can instantly compute sufficiently good routes even in practical large-scale networks.
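
    A simplified sketch of the successive-shortest-route idea (not the paper's exact distance-adjustment rule) appears below: each piece's route is computed with Dijkstra's algorithm, and the links already used by earlier routes are penalized so that later routes tend to avoid them, reducing the chance that a small set of eavesdropped links covers every piece. The penalty factor, graph encoding and function names are assumptions.

        import heapq

        def dijkstra(adj, src, dst):
            """adj: dict node -> list of (neighbor, weight). Returns (cost, path)."""
            dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj[u]:
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return dist[dst], path[::-1]

        def secure_delivery_routes(adj, servers, dst, penalty=10.0):
            """Route one piece from each content server to dst, inflating the weight of
            every link an earlier route used so later routes tend to avoid it."""
            weights = {(u, v): w for u in adj for v, w in adj[u]}
            routes = []
            for src in servers:
                cur_adj = {u: [(v, weights[(u, v)]) for v, _ in adj[u]] for u in adj}
                _, path = dijkstra(cur_adj, src, dst)
                routes.append(path)
                for u, v in zip(path, path[1:]):           # penalize links on this route
                    for edge in ((u, v), (v, u)):
                        if edge in weights:
                            weights[edge] *= penalty
            return routes

        # Usage: two servers s1, s2 deliver their pieces to t over a small network.
        adj = {
            "s1": [("a", 1), ("b", 1)], "s2": [("a", 1), ("b", 1)],
            "a": [("t", 1), ("s1", 1), ("s2", 1)], "b": [("t", 1), ("s1", 1), ("s2", 1)],
            "t": [("a", 1), ("b", 1)],
        }
        print(secure_delivery_routes(adj, ["s1", "s2"], "t"))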
