
Keyword Search Result

[Keyword] computation (490 hits)

Results 341-360 of 490

  • Distributed Evolutionary Digital Filters for IIR Adaptive Digital Filters

    Masahide ABE  Masayuki KAWAMATA  

     
    PAPER-Adaptive Signal Processing
    Vol: E84-A No:8, Page(s): 1848-1855

    This paper proposes distributed evolutionary digital filters (EDFs) as an improved version of the original EDF. The EDF is an adaptive digital filter controlled by an adaptive algorithm based on evolutionary computation. In the proposed method, the large population of the original EDF is divided into smaller subpopulations. Each sub-EDF holds one subpopulation and executes the small-sized main loop of the original EDF. In addition, the distributed algorithm periodically selects promising individuals from each subpopulation, which then migrate to different subpopulations. Numerical examples show that the distributed EDF has a higher convergence rate and a smaller steady-state squared error than the LMS adaptive digital filter, the adaptive digital filter based on the simple genetic algorithm, and the original EDF.
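
    As a concrete illustration of the distributed scheme, here is a minimal, hypothetical sketch of an island-model evolution loop with periodic migration; the fitness function is a placeholder for evaluating IIR filter coefficients against a desired response, which each sub-EDF would do internally.

    ```python
    import random

    def fitness(ind):
        # Placeholder objective: in the paper this would be the squared
        # error of an IIR filter parameterized by `ind`. Smaller is better.
        return sum(x * x for x in ind)

    def evolve(pop, rate=0.1):
        # One small-sized main-loop step: mutate, then keep the better half.
        mutated = [[x + random.gauss(0, rate) for x in ind] for ind in pop]
        return sorted(pop + mutated, key=fitness)[:len(pop)]

    def distributed_edf(n_islands=4, island_size=10, dim=8,
                        generations=100, migrate_every=10):
        islands = [[[random.uniform(-1, 1) for _ in range(dim)]
                    for _ in range(island_size)] for _ in range(n_islands)]
        for g in range(1, generations + 1):
            islands = [evolve(pop) for pop in islands]
            if g % migrate_every == 0:
                # Periodically send each island's most promising individual
                # to a different island (ring topology), replacing the worst.
                best = [min(pop, key=fitness) for pop in islands]
                for i, pop in enumerate(islands):
                    pop[-1] = best[(i - 1) % n_islands]
        return min((ind for pop in islands for ind in pop), key=fitness)

    print(fitness(distributed_edf()))
    ```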

  • Improving the Secure Electronic Transaction Protocol by Using Signcryption

    Goichiro HANAOKA  Yuliang ZHENG  Hideki IMAI  

     
    PAPER-Information Security
    Vol: E84-A No:8, Page(s): 2042-2051

    In the past few years, we have seen the emergence of a large number of proposals for electronic payments over open networks. Among these is the Secure Electronic Transaction (SET) protocol promoted by MasterCard and VISA, which is currently being deployed worldwide. While SET has a number of advantages over other proposals in terms of simplicity and openness, there seems to be a consensus regarding the relative inefficiency of the protocol. This paper proposes a lightweight version of the SET protocol, called "LITESET." For the same level of security as recommended in the latest version of the SET specifications, LITESET yields a 56.2%/51.4% reduction in the computational time of message generation/verification and a 79.9% reduction in communication overhead. This has been achieved by the use of a new cryptographic primitive called signcryption. We hope that our proposal can contribute to the practical and engineering side of real-world electronic payments.

  • Concept and Evaluation of a 2-D FDTD Formulation Based on Expanded Wave Equation Approach

    Koichi ICHIGE  Hiroyuki ARAI  

     
    PAPER-Electromagnetic Theory
    Vol: E84-C No:7, Page(s): 981-993

    This paper presents a novel concept of a Two-Dimensional (2-D) Finite-Difference Time-Domain (FDTD) formulation for the numerical analysis of electromagnetic fields. The FDTD method proposed by Yee is widely used for such analysis, but it has an inherent problem: the electric and magnetic field components are offset from each other by half a cell length in space and half a time step in time. To eliminate these offsets, we begin with the finite-difference approximation of the wave equation rather than Maxwell's equations. Employing several approximation techniques, we develop a novel algorithm that condenses all field components onto equidistant discrete nodes. The proposed algorithm is evaluated against several conventional algorithms by computer simulations.
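
    For reference, the following is a minimal sketch (not the authors' formulation) of a collocated-grid finite-difference update of the 2-D scalar wave equation, the starting point the abstract describes; grid size, initial pulse, and periodic boundary handling are illustrative assumptions.

    ```python
    import numpy as np

    # Standard second-order update of the 2-D wave equation with every
    # sample on one equidistant grid (the collocated-grid idea).
    nx = ny = 100
    c, dx = 1.0, 1.0
    dt = dx / (c * np.sqrt(2)) * 0.99      # CFL-stable time step
    u_prev = np.zeros((nx, ny))
    u = np.zeros((nx, ny))
    u[nx // 2, ny // 2] = 1.0              # initial pulse

    for _ in range(200):
        # Five-point Laplacian; np.roll gives periodic boundaries,
        # a simplification relative to a real absorbing boundary.
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
        u_next = 2 * u - u_prev + (c * dt)**2 * lap
        u_prev, u = u, u_next

    print(float(u.max()))
    ```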

  • Constructing Voronoi Diagrams in the L1 Metric Using the Geographic Nearest Neighbors

    Youngcheul WEE  

     
    PAPER-Algorithms and Data Structures
    Vol: E84-A No:7, Page(s): 1755-1760

    This paper introduces a new approach based on geographic nearest neighbors for constructing the Delaunay triangulation (the dual of the Voronoi diagram) of a set of n sites in the plane under the L1 metric. In general, there is no inclusion relationship between the Delaunay triangulation and the octant neighbor graph. We find, however, that under the L1 metric the octant neighbor graph contains at least one edge of each triangle in the Delaunay triangulation. Using this observation and a range tree scheme, we design an algorithm that constructs the Delaunay triangulation (and thus the Voronoi diagram) in the L1 metric in O(n log n) sequential time. The algorithm is easily parallelized, taking O(log n) time with O(n) processors on a CREW-PRAM.
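
    A naive quadratic-time sketch of the octant (geographic) nearest-neighbor graph mentioned above, under the L1 metric; the paper computes these neighbors far more efficiently with a range tree scheme.

    ```python
    import math

    def octant(dx, dy):
        # Index 0..7 of the 45-degree octant containing direction (dx, dy).
        a = math.atan2(dy, dx) % (2 * math.pi)
        return int(a // (math.pi / 4))

    def octant_neighbor_graph(sites):
        edges = set()
        for i, (px, py) in enumerate(sites):
            nearest = {}
            for j, (qx, qy) in enumerate(sites):
                if i == j:
                    continue
                o = octant(qx - px, qy - py)
                d = abs(qx - px) + abs(qy - py)   # L1 distance
                if o not in nearest or d < nearest[o][0]:
                    nearest[o] = (d, j)
            for _, j in nearest.values():
                edges.add((min(i, j), max(i, j)))
        return edges

    print(octant_neighbor_graph([(0, 0), (1, 2), (3, 1), (2, 3)]))
    ```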

  • 3D Acoustic Image Localization Algorithm by Embedded DSP

    Wataru KOBAYASHI  Noriaki SAKAMOTO  Takao ONOYE  Isao SHIRAKAWA  

     
    PAPER
    Vol: E84-A No:6, Page(s): 1423-1430

    This paper describes a real-time 3D sound localization algorithm to be implemented on a low-power embedded DSP. A distinctive feature of this approach is that the audible frequency band is divided into three subbands, in accordance with an analysis of the sound reflection and diffraction effects through different media from a sound source to the human ears. In the low, intermediate, and high frequency subbands, different 3D sound localization schemes are devised by means of an IIR filter, parametric equalizers, and a comb filter, respectively, so that all of them run in real time on a low-power embedded DSP. The algorithm aims at providing a listener with 3D sound effects through headphones at low cost and low power consumption.
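
    As a small illustration of the high-band building block named above, here is a feedforward comb filter sketch in NumPy; the delay and gain values are arbitrary assumptions, not the paper's design.

    ```python
    import numpy as np

    def comb_filter(x, delay=32, gain=0.7):
        # Feedforward comb: y[n] = x[n] + gain * x[n - delay]
        y = np.copy(x)
        y[delay:] += gain * x[:-delay]
        return y

    x = np.random.randn(1024)          # stand-in for one audio block
    print(comb_filter(x).shape)
    ```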

  • On Detecting Digital Line Components in a Binary Image

    Tetsuo ASANO  Koji OBOKATA  Takeshi TOKUYAMA  

     
    PAPER
    Vol: E84-A No:5, Page(s): 1120-1129

    This paper addresses the problem of detecting digital line components in a given binary image consisting of n black dots arranged on an N × N integer grid. The most popular method in computer vision for this purpose is the Hough transform, which maps each black point to a sinusoidal curve in the dual plane and detects digital line components by voting. We start with a definition of the line component to be detected and present several different algorithms based on that definition. One extreme is the conventional algorithm based on voting on the subdivided dual plane; the other is based on a topological walk over an arrangement of sinusoidal curves defined by the Hough transform. An intermediate algorithm based on half-planar range counting is also presented. Finally, we discuss how to incorporate several practical conditions associated with minimum density and restricted maximality.
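
    A minimal sketch of the voting scheme the conventional algorithm relies on: each point votes for the (θ, ρ) cells of lines passing through it on a discretized dual plane. The grid resolutions here are illustrative assumptions.

    ```python
    import numpy as np

    def hough_votes(points, n_theta=180, n_rho=200, max_rho=100.0):
        # Accumulator over the discretized (theta, rho) dual plane.
        acc = np.zeros((n_theta, n_rho), dtype=int)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rho + max_rho) / (2 * max_rho) * (n_rho - 1))
            acc[np.arange(n_theta), idx.astype(int)] += 1
        return acc

    pts = [(t, 2 * t + 1) for t in range(20)]   # points on y = 2x + 1
    acc = hough_votes(pts)
    print(np.unravel_index(acc.argmax(), acc.shape), acc.max())
    ```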

  • An Efficient Implementation Method of a Metric Computation Accelerator for Fractal Image Compression Using Reconfigurable Hardware

    Hidehisa NAGANO  Akihiro MATSUURA  Akira NAGOYA  

     
    LETTER-VLSI Design Technology and CAD
    Vol: E84-A No:1, Page(s): 372-377

    This paper proposes a method for implementing a metric computation accelerator for fractal image compression using reconfigurable hardware. The most time-consuming part of the encoding in this compression scheme is the computation of metrics among image blocks. In our method, each processing element (PE), configured for an image block, accelerates these computations by pipeline processing. Furthermore, by configuring each PE for a specific image block, we can reduce the number of adders, which are the main computing elements, by half even in the worst case.
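
    The following sketch shows the kind of metric computation that dominates encoding time and that the PEs pipeline in hardware: the squared-error distance between one range block and each candidate domain block. Block sizes and random data are illustrative.

    ```python
    import numpy as np

    def best_domain(range_block, domain_blocks):
        # Exhaustive metric computation: squared error against every
        # candidate domain block; return the best match and its error.
        best, best_err = None, np.inf
        for i, d in enumerate(domain_blocks):
            err = np.sum((range_block - d) ** 2)
            if err < best_err:
                best, best_err = i, err
        return best, best_err

    rng = np.random.default_rng(0)
    r = rng.random((8, 8))
    domains = [rng.random((8, 8)) for _ in range(64)]
    print(best_domain(r, domains))
    ```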

  • On the Complexity of Constructing an Elliptic Curve of a Given Order

    Masato YAMAMICHI  Masahiro MAMBO  Hiroki SHIZUYA  

     
    PAPER
    Vol: E84-A No:1, Page(s): 140-145

    Can we find, in polynomial time, an elliptic curve of a given order over a finite field? This paper is concerned with this question, which has been open since 1986. Consider the partial multivalued function that outputs such an elliptic curve. We characterize the difficulty of computing this function, and show that the polynomial time hierarchy collapses if sat reduces to this function with respect to polynomial time Turing reducibility, where sat is the partial multivalued function that, on input a Boolean formula, outputs a satisfying assignment. We also give a problem that is equivalent to the open question under the Extended Riemann Hypothesis.
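
    To make the question concrete, this brute-force sketch searches for a curve y² = x³ + ax + b over GF(p) with a prescribed number of points by exhaustive point counting; it runs in time exponential in log p, whereas the open question asks for a polynomial-time method.

    ```python
    def curve_order(a, b, p):
        # Count affine solutions of y^2 = x^3 + ax + b over GF(p),
        # plus the point at infinity.
        squares = {}
        for y in range(p):
            squares[y * y % p] = squares.get(y * y % p, 0) + 1
        return 1 + sum(squares.get((x**3 + a * x + b) % p, 0)
                       for x in range(p))

    def find_curve(p, n):
        for a in range(p):
            for b in range(p):
                # Skip singular curves (discriminant zero).
                if (4 * a**3 + 27 * b**2) % p != 0 and curve_order(a, b, p) == n:
                    return a, b
        return None

    # 14 lies in the Hasse interval around p + 1 = 12 for p = 11.
    print(find_curve(11, 14))
    ```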

  • Head Tissue Heterogeneity Required in Computational Dosimetry for Portable Telephones

    Jianqing WANG  Osamu FUJIWARA  

     
    PAPER-Electromagnetic Compatibility (EMC)
    Vol: E84-B No:1, Page(s): 100-105

    The head tissue heterogeneity required in spatial peak specific absorption rate (SAR) assessment for portable telephones was investigated using the FDTD method in conjunction with an MRI-based human head model. The tissue heterogeneity of the head model was varied from one type of tissue to 17 types. The results showed that, at 900 MHz and 2 GHz, homogeneous modeling results in an underestimation of about 20% for portable telephones with λ/2 monopole antennas and an overestimation of the same extent for those with λ/4 monopole or helical antennas. A head model with a simple skin-fat-muscle-bone-brain structure appears sufficient to obtain a fairly accurate one-gram or ten-gram averaged spatial peak SAR value in computational dosimetry for portable telephone compliance.
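
    A schematic sketch of the assessed quantity, under assumed uniform tissue parameters: local SAR per voxel is σ|E|²/ρ (with |E| taken as an rms magnitude), and the spatial peak one-gram value is the maximum of SAR averaged over cubes holding roughly one gram of tissue. Voxel size and material values are illustrative assumptions, not the paper's head model.

    ```python
    import numpy as np

    def peak_averaged_sar(E_mag, sigma, rho, voxel_mass_g, target_g=1.0):
        sar = sigma * E_mag**2 / rho                     # local SAR, W/kg
        # Side length (in voxels) of a cube holding about target_g grams.
        k = max(1, round((target_g / voxel_mass_g) ** (1.0 / 3.0)))
        nx, ny, nz = sar.shape
        best = 0.0
        for i in range(nx - k + 1):                      # slide a k^3 cube
            for j in range(ny - k + 1):
                for l in range(nz - k + 1):
                    best = max(best, sar[i:i+k, j:j+k, l:l+k].mean())
        return best

    rng = np.random.default_rng(1)
    E = rng.random((20, 20, 20))      # stand-in for an FDTD |E| field, V/m
    print(peak_averaged_sar(E, sigma=1.0, rho=1000.0, voxel_mass_g=0.008))
    ```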

  • Sublogarithmic Space-Bounded Multi-Inkdot Two-Way Alternating Turing Machines with Only Universal States

    Tsunehiro YOSHINAGA  Katsushi INOUE  

     
    LETTER
    Vol: E84-D No:1, Page(s): 61-64

    This paper investigates a hierarchical property, based on the number of inkdots, of the accepting powers of sublogarithmic space-bounded multi-inkdot two-way alternating Turing machines with only universal states. For each k ≥ 1 and any function L(n), let strong-2UTMk(L(n)) (weak-2UTMk(L(n))) be the class of sets accepted by strongly (weakly) L(n) space-bounded k-inkdot two-way alternating Turing machines with only universal states. We show that for each k ≥ 1, strong-2UTMk+1(log log n) - weak-2UTMk(o(log n)) ≠ Ø.

  • Computational Complexity of Finding Highly Co-occurrent Itemsets in Market Basket Databases

    Yeon-Dae KWON  Yasunori ISHIHARA  Shougo SHIMIZU  Minoru ITO  

     
    PAPER-General Fundamentals and Boundaries
    Vol: E83-A No:12, Page(s): 2723-2735

    Data mining analyzes all the data in a huge database to obtain useful information for its users. One of the well-studied problems in data mining is the search for meaningful association rules in a market basket database containing massive amounts of sales transactions. Mining meaningful association rules means first finding all the large itemsets and then constructing meaningful association rules from them. In our previous work, we showed that it is NP-complete to decide whether there exists a large itemset of a given size, and we proposed a subclass of databases, called k-sparse databases, for which all the large itemsets can be found efficiently. Intuitively, k-sparsity of a database means that the supports of itemsets of size k or more are sufficiently low. In this paper, we introduce the notion of (k,c)-sparsity, which is strictly weaker than the k-sparsity of our previous work; the value of c represents a degree of sparsity. Using (k,c)-sparsity, we propose a larger subclass of databases for which all the large itemsets can still be found efficiently. Next, we propose alternative measures to the support. For each measure, an itemset is called highly co-occurrent if the value indicating the correlation among its items exceeds a given threshold. We define the highly co-occurrent itemset problem formally as deciding whether there exists a highly co-occurrent itemset of a given size, and show that the problem is NP-complete under each of the proposed measures. Furthermore, based on (k,c)-sparsity, we propose subclasses of databases for which all the highly co-occurrent itemsets can be found efficiently.
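
    A minimal sketch of the underlying "large itemset" notion: an itemset is large when its support, the fraction of transactions containing it, reaches a threshold. Deciding whether a large itemset of a given size exists is what the paper shows to be NP-complete in general; the toy database below is illustrative.

    ```python
    from itertools import combinations

    def support(itemset, transactions):
        # Fraction of transactions containing every item of the itemset.
        hits = sum(1 for t in transactions if itemset <= t)
        return hits / len(transactions)

    def large_itemsets(transactions, size, threshold):
        # Brute force over all candidate itemsets of the given size.
        items = set().union(*transactions)
        return [frozenset(c) for c in combinations(sorted(items), size)
                if support(frozenset(c), transactions) >= threshold]

    db = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
    print(large_itemsets(db, 2, 0.5))
    ```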

  • High Level Analysis of Clock Regions in a C++ System Description

    Luc RYNDERS  Patrick SCHAUMONT  Serge VERNALDE  Ivo BOLSENS  

     
    LETTER-High-level Synthesis
    Vol: E83-A No:12, Page(s): 2631-2632

    Timing verification of digital synchronous designs is a complex process that is traditionally carried out deep in the design cycle, at the gate level. A method, embodied in a C++ based design system, is presented that allows modeling and verification of clock regions at a higher level. By combining event-driven, clock-cycle-true, and behavioral simulation, we are able to perform static and dynamic timing analysis of the clock regions.

  • The Optimal Sectionalized Trellises for the Generalized Version of Viterbi Algorithm of Linear Block Codes and Its Application to Reed-Muller Codes

    Yuansheng TANG  Toru FUJIWARA  Tadao KASAMI  

     
    PAPER-Coding Theory
    Vol: E83-A No:11, Page(s): 2329-2340

    An algorithm for finding the optimal sectionalization of sectionalized trellises with respect to various optimality criteria was presented by Lafourcade and Vardy. In this paper, for linear block codes, we give a direct method for finding the optimal sectionalization when the optimality criterion is the total number |E| of edges, the expansion index |E|-|V|+1, or the quantity 2|E|-|V|+1, using only the dimensions of the past and future sub-codes. A more concrete method for determining the optimal sectionalization is given for the Reed-Muller codes under the natural lexicographic coordinate ordering.
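
    For context, a generic sketch of the dynamic program behind a Lafourcade-Vardy style search over sectionalizations: given a per-section cost c(i, j) (for |E| this would be derived from the dimensions of the past and future sub-codes), choose section boundaries minimizing the total. The quadratic cost below is a placeholder, not a trellis formula.

    ```python
    from functools import lru_cache

    def optimal_sectionalization(n, section_cost):
        @lru_cache(maxsize=None)
        def best(j):
            # Minimal total cost of sectionalizing positions 0..j,
            # together with the chosen boundary positions.
            if j == 0:
                return 0, ()
            options = [(best(i)[0] + section_cost(i, j), best(i)[1] + (i,))
                       for i in range(j)]
            return min(options)
        cost, cuts = best(n)
        return cost, cuts + (n,)

    print(optimal_sectionalization(8, lambda i, j: 1 + (j - i) ** 2))
    ```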

  • Evolutionary Synthesis of Fast Constant-Coefficient Multipliers

    Naofumi HOMMA  Takafumi AOKI  Tatsuo HIGUCHI  

     
    PAPER-Nonlinear Problems
    Vol: E83-A No:9, Page(s): 1767-1777

    This paper presents an efficient graph-based evolutionary optimization technique called Evolutionary Graph Generation (EGG) and its application to the design of fast constant-coefficient multipliers using a parallel counter-tree architecture. An important feature of EGG is its capability to handle general graph structures directly in the evolution process, instead of encoding the graph structures into indirect representations such as bit strings and trees. This paper also addresses the major problem of EGG: the significant computation time required for verifying the function of generated circuits. To solve this problem, a new functional verification technique for arithmetic circuits is proposed. It is demonstrated that the EGG system can create efficient multiplier structures that are comparable or superior to known conventional designs.

  • Practical Inverse Modeling with SIESTA

    Rudolf STRASSER  Siegfried SELBERHERR  

     
    PAPER-Simulation Methodology and Environment
    Vol: E83-C No:8, Page(s): 1303-1310

    We present a simulation system which meets the requirements for practical application of inverse modeling in a professional environment. A tool interface for the integration of arbitrary simulation tools at the user level is introduced and a methodology for the formation of simulation networks is described. A Levenberg-Marquardt optimizer automates the inverse modeling procedure. Strategies for the efficient execution of simulation tools are discussed. An example demonstrates the extraction of doping profile information on the basis of electrical measurements.
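
    The optimizer step that such a system automates can be sketched with an off-the-shelf Levenberg-Marquardt fit; here a toy exponential model stands in for the process/device simulation, and in real inverse modeling the residual would instead compare simulated with measured electrical data.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model(params, x):
        # Toy stand-in for a full simulation run parameterized by `params`.
        a, b = params
        return a * np.exp(b * x)

    x = np.linspace(0.0, 1.0, 20)
    # Synthetic "measurements" from known parameters plus noise.
    measured = model([2.0, -1.5], x) + 0.01 * np.random.randn(x.size)

    # Levenberg-Marquardt minimization of the residual.
    result = least_squares(lambda p: model(p, x) - measured,
                           x0=[1.0, -1.0], method="lm")
    print(result.x)
    ```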

  • Private Communications with Chaos Based on the Fixed-Point Computation

    Hiroyuki KAMATA  Yohei UMEZAWA  Masamichi DOBASHI  Tetsuro ENDO  Yoshihisa ISHIDA  

     
    PAPER-Information Security
    Vol: E83-A No:6, Page(s): 1238-1246

    This paper proposes a private communication system based on chaos using fixed-point digital computation. When fixed-point computation is adopted, the chaotic properties of the modulated signal must be checked carefully, along with calculation error problems (especially overflow). We propose a novel chaos modem system for private communications comprising a chaotic-neuron-type nonlinearity, an unstable digital filter, and an overflow function. We demonstrate that the modulated signal remains hyperchaotic over fixed-point computations of 10,000 data points, and evaluate the security of the system in terms of the sensitivity of the coefficients required for demodulation.
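
    A toy sketch of the fixed-point theme: iterating a simple map entirely in Q15 integer arithmetic with wrap-around overflow. The map, gain, and word length are illustrative assumptions, not the paper's chaotic-neuron/unstable-filter modem.

    ```python
    Q = 15
    ONE = 1 << Q           # 1.0 in Q15 fixed point

    def q_mul(a, b):
        # Fixed-point multiply: product of two Q15 values, rescaled.
        return (a * b) >> Q

    def wrap(v):
        # Two's-complement style overflow into the range [-ONE, ONE).
        return (v + ONE) % (2 * ONE) - ONE

    def iterate(x0=ONE // 3, steps=10, a=int(1.9 * ONE)):
        x, out = x0, []
        for _ in range(steps):
            # Cubic-style map x <- a*x - x^3, all in integer arithmetic.
            x = wrap(q_mul(a, x) - q_mul(x, q_mul(x, x)))
            out.append(x / ONE)
        return out

    print(iterate())
    ```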

  • NP-Completeness of Reallocation Problems with Restricted Block Volume

    Hiroyoshi MIWA  Hiro ITO  

     
    PAPER
    Vol: E83-A No:4, Page(s): 590-597

    A reallocation problem asks whether blocks can be moved from their current boxes to their destination boxes in a number of moves less than or equal to a given positive integer. The problem is simply defined and has many practical applications. We previously proved the following results: the reallocation problem in which every block has a volume of one unit can be solved in linear time, but the reallocation problem with unrestricted block volumes is NP-complete, even if no blocks have bypass boxes. The computational complexity of reallocation problems in which the range of block volumes is restricted, however, has remained open. We prove that the reallocation problem is NP-complete even if the volume of every block is restricted to one or two units and no blocks have bypass boxes. Furthermore, we prove that the reallocation problem is NP-complete even if there are only two blocks whose volume is a fixed positive integer greater than or equal to two, the volume of every other block is one unit, and no blocks have bypass boxes. Next, we consider blocks that must stay in the same box in both the initial and final states but may be moved to another box temporarily to make room for other blocks. We prove that the reallocation problem with such blocks is also NP-complete even if the volume of every block is restricted to one unit.

  • Non-interactive and Optimally Resilient Distributed Multiplication

    Masayuki ABE  

     
    PAPER
    Vol: E83-A No:4, Page(s): 598-605

    This paper presents a non-interactive and optimally resilient distributed multiplication scheme. By non-interactive we mean that the players need to use their outgoing communication channels only once, without needing to synchronize with the other players, as long as no disruption occurs. Our protocol withstands the corruption of any minority of the players, so it provides optimal resiliency. Furthermore, the shared secrets are secure even against infinitely powerful adversaries. The security is proven under the intractability assumption of the discrete logarithm problem. These properties are achieved by using an information-theoretically secure non-interactive verifiable secret sharing as a kind of non-interactive proof system between a single prover and distributed verifiers. Compared to an earlier interactive solution in the same setting, the cost is an increase in local computation and communication complexity by a factor of the threshold used in the verifiable secret sharing.

  • A Parallel Algorithm for Constructing Strongly Convex Superhulls of Points

    Carla Denise CASTANHO  Wei CHEN  Koichi WADA  

     
    PAPER
    Vol: E83-A No:4, Page(s): 722-732

    Let S be a set of n points in the plane and CH(S) be the convex hull of S. We consider the problem of constructing an approximate convex hull that contains CH(S) with strong convexity. An ε-convex δ-superhull of S is a convex polygon P satisfying the following conditions: (1) P has at most O(n) vertices, (2) P contains S, (3) no vertex of P lies farther than δ outside CH(S), and (4) P remains convex even if its vertices are perturbed by as much as ε. The parameters ε and δ represent the strength of convexity of P and the degree of approximation of P to CH(S), respectively. This paper presents the first parallel method for the problem. We show that an ε-convex (8+4√2)ε-superhull of S can be constructed in O(log n) time using O(n) processors, or in O(log n) time using O(n/log n) processors if S is sorted, in the EREW-PRAM model. We implemented the algorithm and found that the average performance is much better than the theoretical analysis indicates: the results are more strongly convex and approximate CH(S) much more closely.

  • Detection of Conserved Domains in Protein Sequences Using a Maximum-Density Subgraph Algorithm

    Hideo MATSUDA  

     
    PAPER
    Vol: E83-A No:4, Page(s): 713-721

    In this paper, we propose a method for detecting conserved domains in a set of amino acid sequences belonging to a protein family. The method detects the domains as follows: first, generate fixed-length subsequences from the sequences; second, construct a weighted graph that connects any two subsequences (vertices) whose similarity exceeds a pre-defined threshold; third, search for the maximum-density subgraph of each connected component of the graph; finally, identify conserved domains in the sequences by combining the results of the previous step. Performance results obtained by applying the method to several protein families with complex conserved domains show that the method can detect those domains even when some are only weakly conserved.
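
    The pipeline can be sketched end to end on toy inputs: fixed-length subsequences as vertices, edges between pairs scoring above a threshold, and a greedy peeling approximation of the maximum-density subgraph (the classic 2-approximation, used here in place of the paper's exact algorithm). The identity-based score is a stand-in for a real amino acid similarity score.

    ```python
    def kmers(seq, k):
        # Fixed-length subsequences of one sequence.
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    def score(a, b):
        # Toy similarity: count of matching positions.
        return sum(x == y for x, y in zip(a, b))

    def densest_subgraph(nodes, edges):
        # Greedy peeling: repeatedly remove the minimum-degree vertex,
        # keeping the densest intermediate subgraph seen.
        nodes, edges = set(nodes), set(edges)
        best, best_density = set(nodes), len(edges) / max(1, len(nodes))
        while len(nodes) > 1:
            v = min(nodes, key=lambda u: sum(u in e for e in edges))
            nodes.remove(v)
            edges = {e for e in edges if v not in e}
            d = len(edges) / len(nodes)
            if d > best_density:
                best, best_density = set(nodes), d
        return best, best_density

    k, threshold = 4, 3
    seqs = ["MKVLAGRWT", "AKVLAGHYT", "MKVLAGRQS"]
    nodes = [m for s in seqs for m in kmers(s, k)]
    edges = {frozenset((a, b)) for a in nodes for b in nodes
             if a != b and score(a, b) >= threshold}
    print(densest_subgraph(nodes, edges))
    ```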
