
Keyword Search Result

[Keyword] complex (623 hits)

Results 521-540 of 623

  • False Drop Analysis of Set Retrieval with Signature Files

    Hiroyuki KITAGAWA, Yoshiharu ISHIKAWA
    PAPER-Databases
    Vol: E80-D No:6  Page(s): 653-664

    Modern database systems have to support complex data objects, which appear in advanced data models such as object-oriented data models and nested relational data models. Set-valued objects are basic constructs to build complex structures in those models. Therefore, efficient processing of set-valued object retrieval (simply, set retrieval) is an important feature required of advanced database systems. Our previous work proposed a basic scheme to apply superimposed coded signature files to set retrieval and showed its potential advantages over the B-tree index based approach using a performance analysis model. Retrieval with signature files is always accompanied by mismatches called false drops, and proper control of the false drops is indispensable in the signature file design. This study intensively analyzes the false drops in set retrieval with signature files. First, schemes to use signature files are presented to process set retrieval involving "has-subset," "is-subset," "has-intersection," and "is-equal" predicates, and generic formulas estimating the false drops are derived. Then, three sets of concrete formulas are derived in three ways to estimate the false drops in the four types of set retrieval. Finally, their estimates are validated with computer simulations, and advantages and disadvantages of each set of the false drop estimation formulas are discussed. The analysis shows that proper choice of estimation formulas gives quite accurate estimates of the false drops in set retrieval with signature files.
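
    As an illustration of the general idea only (not the paper's analysis or parameters), the following Python sketch builds superimposed coded signatures for set-valued objects and filters a "has-subset" query; the signature width F, the weight m and the hash are arbitrary choices, and matches may be false drops that must be verified against the stored sets.

      # Illustrative sketch of set retrieval with superimposed coded signatures.
      # F (signature width), m (bits set per element) and the hash are arbitrary
      # choices for this example, not parameters taken from the paper.
      import hashlib

      F = 64   # signature width in bits
      m = 3    # bits superimposed per element

      def element_signature(elem):
          """Superimpose m hash-selected bits for one element."""
          sig = 0
          digest = hashlib.sha256(str(elem).encode()).digest()
          for k in range(m):
              sig |= 1 << (digest[k] % F)
          return sig

      def set_signature(s):
          """Signature of a set = bitwise OR of its elements' signatures."""
          sig = 0
          for elem in s:
              sig |= element_signature(elem)
          return sig

      def has_subset_candidate(record_sig, query_set):
          """'has-subset' filter: every query bit must appear in the record
          signature.  Matches may be false drops and must be verified."""
          q = set_signature(query_set)
          return record_sig & q == q

      # Example: filter records whose set contains {2, 5}.
      records = [{1, 2, 5, 9}, {3, 4}, {2, 6, 7}]
      sigs = [set_signature(r) for r in records]
      candidates = [r for r, s in zip(records, sigs) if has_subset_candidate(s, {2, 5})]
      verified = [r for r in candidates if {2, 5} <= r]   # drop any false drops
      print(candidates, verified)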

  • A Uniform Asymptotic Expression for the Function Arising in the Wedge Scattering Problem

    Masao KODAMA, Hideomi TAKAHASHI, Kengo TAIRA
    LETTER-Electromagnetic Theory
    Vol: E80-C No:6  Page(s): 831-833

    Scattering of a plane electromagnetic wave by a conducting wedge is discussed. A previously known solution is not applicable to all of the transition regions when its parameter is held constant. This study presents a new solution that consists of a single expression applicable to the shadow region, the illuminated region, and the transition regions, and that involves no such parameter.

  • Linear Complexity of Periodic Sequences Obtained from a Sequence over GF(p) with Period p^n-1 by One-Symbol Deletion

    Satoshi UEHARA, Kyoki IMAMURA
    LETTER-Information Theory and Coding Theory
    Vol: E80-A No:6  Page(s): 1164-1166

    From a sequence {a_i}_{i≥0} over GF(p) with period p^n - 1 we can obtain another periodic sequence {â_i}_{i≥0} with period p^n - 2 by deleting one symbol at the end of each period. We give upper and lower bounds on the linear complexity of {â_i}_{i≥0} as a typical example of the instability of linear complexity. The bounds are derived by using the relation between the characteristic polynomials of {a_i}_{i≥0} and {a_i^(j)}_{i≥0} = {a_i + j}_{i≥0}, j ∈ GF(p)\{0}. For a binary m-sequence {a_i}_{i≥0} with period 2^n - 1, n - 1 a prime, we give the explicit formula for the characteristic polynomial of {â_i}_{i≥0}.
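
    For readers who want to observe this instability numerically, the following sketch computes linear complexity with the standard Berlekamp-Massey algorithm over GF(2); it is a generic tool, not the derivation used in this letter, and the example period is only a toy case.

      def linear_complexity_gf2(s):
          """Berlekamp-Massey over GF(2): linear complexity of the bit list s."""
          n = len(s)
          c = [0] * n; b = [0] * n
          c[0] = b[0] = 1
          L, m = 0, -1
          for i in range(n):
              d = s[i]                       # discrepancy
              for j in range(1, L + 1):
                  d ^= c[j] & s[i - j]
              if d:
                  t = c[:]
                  shift = i - m
                  for j in range(n - shift):
                      c[j + shift] ^= b[j]
                  if 2 * L <= i:
                      L, m, b = i + 1 - L, i, t
          return L

      # Toy example: an m-sequence of period 7 = 2^3 - 1, and the same period
      # with its last symbol deleted (period 6); several periods are supplied
      # so that the periodic behaviour is visible to the algorithm.
      period = [1, 0, 0, 1, 0, 1, 1]
      shortened = period[:-1]
      print(linear_complexity_gf2(period * 4), linear_complexity_gf2(shortened * 4))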

  • Syntactic Unification Problems under Constrained Substitutions

    Kazuhiro TAKADA, Yuichi KAJI, Tadao KASAMI
    PAPER-Automata, Languages and Theory of Computing
    Vol: E80-D No:5  Page(s): 553-561

    Some kinds of practical problems, such as security verification of cryptographic protocols, can be described as the problem of accomplishing a given purpose using only limited operations and limited materials. To model such problems in a natural way, unification problems under constrained substitutions have been proposed. This paper is a collection of results on the decidability and the computational complexity of a syntactic unification problem under constrained substitutions. A number of decidable, undecidable, tractable and intractable results for the problem are presented. Since a unification problem under constrained substitutions can be regarded as an order-sorted unification problem with term declarations in which the number of sorts is only one, the results presented in this paper also indicate how the intractability of order-sorted unification problems is reduced by restricting the number of sorts to one.
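
    As background only, the following sketch implements plain (unconstrained) syntactic unification in the Robinson style; the constrained substitutions studied in the paper impose additional restrictions that this sketch does not model. Terms are variables (strings starting with '?') or tuples (function symbol, arg1, ...).

      def walk(t, subst):
          """Follow variable bindings until a non-bound term is reached."""
          while isinstance(t, str) and t.startswith('?') and t in subst:
              t = subst[t]
          return t

      def occurs(v, t, subst):
          t = walk(t, subst)
          if t == v:
              return True
          return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

      def unify(s, t, subst=None):
          """Return a most general unifier as a dict, or None if none exists."""
          subst = {} if subst is None else subst
          s, t = walk(s, subst), walk(t, subst)
          if s == t:
              return subst
          if isinstance(s, str) and s.startswith('?'):
              return None if occurs(s, t, subst) else {**subst, s: t}
          if isinstance(t, str) and t.startswith('?'):
              return unify(t, s, subst)
          if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
              for a, b in zip(s[1:], t[1:]):
                  subst = unify(a, b, subst)
                  if subst is None:
                      return None
              return subst
          return None

      # Example: f(?x, g(a)) unified with f(g(?y), ?x) yields ?x -> g(?y), ?y -> a.
      print(unify(('f', '?x', ('g', 'a')), ('f', ('g', '?y'), '?x')))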

  • Value Distribution of Linear Complexity for p-Ary Periodic Sequences with Period p^n, p a Prime

    Satoshi UEHARA, Kyoki IMAMURA, Takayasu KAIDA
    LETTER-Information Theory and Coding Theory
    Vol: E80-A No:5  Page(s): 920-921

    We first show a useful property of the fast algorithm for computing the linear complexities of p-ary periodic sequences with period p^n (p a prime). The property is then successfully applied to obtain the value distribution of the linear complexity for p-ary periodic sequences with period p^n.
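
    The fast algorithm referred to here handles p-ary sequences with period p^n; as a concrete point of reference, the sketch below shows only the well-known binary special case (p = 2), i.e., the Games-Chan algorithm, which computes the linear complexity of a 2^n-periodic binary sequence in linear time.

      def games_chan(s):
          """Games-Chan algorithm: linear complexity of a binary sequence given
          as one full period of length 2^n (binary special case only)."""
          assert len(s) > 0 and (len(s) & (len(s) - 1)) == 0, "period must be 2^n"
          c = 0
          while len(s) > 1:
              half = len(s) // 2
              left, right = s[:half], s[half:]
              if left == right:
                  s = left                              # complexity unchanged
              else:
                  c += half                             # add 2^(n-1)
                  s = [a ^ b for a, b in zip(left, right)]
          return c + s[0]

      # Example: linear complexity of the sequence whose period is [1, 0, 0, 1].
      print(games_chan([1, 0, 0, 1]))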

  • Computational Power of Nondeterministic Ordered Binary Decision Diagrams and Their Subclasses

    Kazuyoshi TAKAGI, Koyo NITTA, Hironori BOUNO, Yasuhiko TAKENAGA, Shuzo YAJIMA
    PAPER
    Vol: E80-A No:4  Page(s): 663-669

    Ordered Binary Decision Diagrams (OBDDs) are graph-based representations of Boolean functions which are widely used because of their good properties. In this paper, we introduce nondeterministic OBDDs (NOBDDs) and their restricted forms, and evaluate their expressive power. In some applications of OBDDs, canonicity, one of the good properties of OBDDs, is not necessary. In such cases, we can reduce the required amount of storage by using OBDDs in some non-canonical form. A class of NOBDDs can be used as a non-canonical form of OBDDs. In this paper, we focus on two particular methods which can be regarded as using restricted forms of NOBDDs. Our aim is to show, from a theoretical point of view, how the size of OBDDs can be reduced in such forms. First, we consider a method to solve the satisfiability problem of combinational circuits that uses the structure of the circuits as a key to reducing the NOBDD size. We show that the NOBDD size is related to the cutwidth of the circuits. Second, we analyze methods that use OBDDs to represent Boolean functions as sets of product terms. We show that the class of functions treated feasibly in this representation strictly contains the corresponding class for OBDDs and is contained in that for NOBDDs.
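
    As background for readers unfamiliar with OBDDs (this is not the NOBDD machinery of the paper), the following sketch builds a reduced OBDD from a truth table under a fixed variable order, applying the usual node-elimination and node-sharing rules.

      # Minimal reduced OBDD built from a truth table under the order x0 < x1 < ...
      # Ids 0 and 1 denote the terminal nodes; everything else is illustrative.
      import itertools

      nodes, unique = {}, {}        # id -> (level, low, high); structural sharing

      def mk(level, low, high):
          """Apply the OBDD reduction rules: drop redundant tests, share nodes."""
          if low == high:
              return low
          key = (level, low, high)
          if key not in unique:
              unique[key] = len(nodes) + 2
              nodes[unique[key]] = key
          return unique[key]

      def build(tt, level, n):
          """tt: truth table of the current subfunction, x_level as its top variable."""
          if level == n:
              return 1 if tt[0] else 0
          half = len(tt) // 2
          return mk(level, build(tt[:half], level + 1, n), build(tt[half:], level + 1, n))

      # Example: the 3-variable function (x0 AND x1) OR x2.
      n = 3
      tt = [int((x0 and x1) or x2) for x0, x1, x2 in itertools.product([0, 1], repeat=n)]
      root = build(tt, 0, n)
      print("root:", root, "internal nodes:", nodes)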

  • Polynomials Approximating Complex Functions

    Masao KODAMA, Kengo TAIRA
    LETTER-Numerical Analysis and Optimization
    Vol: E80-A No:4  Page(s): 778-781

    We frequently use a polynomial to approximate a complex function. This study presents a method that determines the optimum coefficients and the number of terms of the polynomial, and estimates the error of the polynomial.
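
    The abstract does not spell out the method, so the following sketch is a generic illustration only: it fits a polynomial to samples of a complex function by linear least squares and reports the maximum modulus of the error at the sample points; the degree and the sample points are arbitrary choices.

      # Generic illustration (not the paper's method): least-squares polynomial
      # fit of a complex-valued function on sample points in the complex plane.
      import numpy as np

      def fit_poly(f, z, degree):
          """Return coefficients c[0..degree] with p(z) = sum c[k] * z**k."""
          A = np.vander(z, degree + 1, increasing=True)   # Vandermonde matrix
          c, *_ = np.linalg.lstsq(A, f(z), rcond=None)
          return c

      def poly_eval(c, z):
          return sum(ck * z**k for k, ck in enumerate(c))

      # Example: approximate exp(z) on the unit circle by a degree-6 polynomial.
      z = np.exp(2j * np.pi * np.arange(64) / 64)
      c = fit_poly(np.exp, z, 6)
      err = np.max(np.abs(poly_eval(c, z) - np.exp(z)))
      print("max error on sample points:", err)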

  • The Largest Common Similar Substructure Problem

    Shaoming LIU, Eiichi TANAKA
    PAPER
    Vol: E80-A No:4  Page(s): 643-650

    This paper discusses the largest common similar substructure (in short, LCSS) problem for trees. The problem is, over all pairs consisting of a substructure of A and a substructure of B, to find a pair, denoted by A' and B', such that A' is most similar to B' and the sum of the number of vertices of A' and that of B' is largest. An algorithm for the LCSS problem for unrooted and unordered trees (in short, trees) and one for trees embedded in a plane (in short, CO-trees) are proposed. The time complexity of the algorithm for trees is O(max(m_a, m_b)^2 N_a N_b) and that for CO-trees is O(m_a m_b N_a N_b), where m_a (m_b) and N_a (N_b) are the largest degree of a vertex of tree T_a (T_b) and the number of vertices of T_a (T_b), respectively. It is easy to modify the algorithms to enumerate all of the LCSSs for trees and CO-trees. The algorithms can be applied to structure-activity studies in chemistry and to various structure comparison problems.

  • An n^3 u Upper Bound on the Complexity for Deciding the Truth of a Presburger Sentence Involving Two Variables Bounded Only by Existential Quantifiers

    Kuniaki NAOI, Naohisa TAKAHASHI
    PAPER-Algorithm and Computational Complexity
    Vol: E80-D No:2  Page(s): 223-231

    We show that the truth of a prenex normal form Presburger sentence bounded only by existential quantifiers (an EPP-sentence) involving two variables can be decided in deterministic polynomial time. Specifically, an upper bound on the computation for the decision is O(n^3 u), where n is the number of atoms of the EPP-sentence and u is the largest absolute value of all coefficients in the EPP-sentence. In the analysis for the upper bound, the random access machine is assumed as the machine model, together with a uniform cost criterion. Deciding the truth of an EPP-sentence is an NP-complete problem when the number of variables is not fixed. Furthermore, it has remained an open problem whether the truth of an EPP-sentence involving two or more variables can be decided in deterministic polynomial time when the number of variables is fixed. We previously proposed a procedure for quickly deciding the truth of an EPP-sentence on the basis of a suggestion by D. C. Cooper, and we found the upper bound by analyzing that decision procedure. The procedure can be applied both to automated correctness proofs of specifications in various design fields and to the detection of infeasible paths in a program. In the procedure, a matrix denoting the coefficients of the variables in the EPP-sentence is triangulated.
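
    As an illustration of what an EPP-sentence looks like (decided here with the Z3 SMT solver rather than with the O(n^3 u) procedure analyzed in the paper), consider the following sketch; the sentence itself is a made-up example.

      # Illustration only: deciding a tiny existential Presburger sentence in two
      # variables with the Z3 SMT solver (z3-solver package), not with the paper's
      # decision procedure.
      from z3 import Ints, Or, And, Solver, sat

      x, y = Ints('x y')
      # Example sentence:  ∃x ∃y . (3x + 2y = 11 ∧ x >= 0 ∧ y >= 0) ∨ (x - y = 7)
      sentence = Or(And(3 * x + 2 * y == 11, x >= 0, y >= 0), x - y == 7)

      s = Solver()
      s.add(sentence)
      print("true" if s.check() == sat else "false")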

  • Computing the Minkowski Sum of Monotone Polygons

    Antonio HERNÁNDEZ-BARRERA
    PAPER-Algorithm and Computational Complexity
    Vol: E80-D No:2  Page(s): 218-222

    This paper presents algorithms for computing the Minkowski sum of two polygons P and Q for a family of problems. For P convex and Q monotone, an algorithm is given with O(nm) time and space complexity. For both P and Q monotone polygons, an O(nm log nm) time algorithm is presented, and it is shown that the complexity of the sum is Θ(nm α(min(n, m))) in the worst case, where α(·) is the inverse of Ackermann's function. Finally, an O((nm + k) log nm) time algorithm is given when P is monotone and Q is simple, where k in the worst case could be Θ(n^2 m). The complexity of P ⊕ Q is shown to be Θ(n^2 m) in the worst case. Here, m and n denote the number of edges of P and Q, respectively.
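
    As a concrete starting point for the convex case only (not the monotone- or simple-polygon algorithms of the paper): when both P and Q are convex, P ⊕ Q is the convex hull of all pairwise vertex sums, which the following sketch computes directly.

      # Convex-case illustration only: for convex P and Q, the Minkowski sum
      # P ⊕ Q equals the convex hull of all pairwise vertex sums.
      def cross(o, a, b):
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def convex_hull(points):
          """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
          pts = sorted(set(points))
          if len(pts) <= 2:
              return pts
          lower, upper = [], []
          for p in pts:
              while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                  lower.pop()
              lower.append(p)
          for p in reversed(pts):
              while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                  upper.pop()
              upper.append(p)
          return lower[:-1] + upper[:-1]

      def minkowski_sum_convex(P, Q):
          return convex_hull([(px + qx, py + qy) for px, py in P for qx, qy in Q])

      # Example: a unit square plus a small triangle.
      square = [(0, 0), (1, 0), (1, 1), (0, 1)]
      triangle = [(0, 0), (2, 0), (0, 1)]
      print(minkowski_sum_convex(square, triangle))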

  • The Complexity of Threshold Circuits for Parity Functions

    Shao-Chin SUNG, Tetsuro NISHINO
    LETTER-Algorithm and Computational Complexity
    Vol: E80-D No:1  Page(s): 91-93

    In this paper, we show that a parity function with n variables can be computed by a threshold circuit of depth O((log n)/c) and size O((2^c log n)/c), for all 1 ≤ c ≤ [log(n+1)] - 1. From this construction, we obtain an O(log n / log log n) upper bound on the depth of polylogarithmic-size threshold circuits for parity functions. By using the result of Impagliazzo, Paturi and Saks [5], we also show an Ω(log n / log log n) lower bound on the depth of such threshold circuits. This is an answer to the open question posed in [11].
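
    To make the circuit model concrete, the following sketch simulates the textbook depth-2 threshold circuit for parity, with n first-layer threshold gates feeding one output gate with alternating ±1 weights; it illustrates threshold circuits in general and is not the depth/size trade-off construction of this letter.

      # Depth-2 threshold circuit for parity (illustration of the model only).
      from itertools import product

      def threshold_gate(weights, inputs, t):
          """Linear threshold gate: fires iff the weighted sum reaches t."""
          return int(sum(w * x for w, x in zip(weights, inputs)) >= t)

      def parity_circuit(x):
          n = len(x)
          # First layer: gate j fires iff at least j of the inputs are 1.
          layer1 = [threshold_gate([1] * n, x, j) for j in range(1, n + 1)]
          # Output gate: alternating +1/-1 weights; the weighted sum is 1 exactly
          # when an odd number of inputs are 1.
          weights = [1 if j % 2 == 0 else -1 for j in range(n)]
          return threshold_gate(weights, layer1, 1)

      # Exhaustive check against the parity function for n = 4.
      assert all(parity_circuit(list(x)) == sum(x) % 2 for x in product([0, 1], repeat=4))
      print("depth-2 threshold circuit computes parity on 4 inputs")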

  • On Multi-Inkdot Two-Way Alternating Turing Machines and Pushdown Automata with Sublogarithmic Space and Constant Leaf-Size

    Jianliang XU, Katsushi INOUE, Yue WANG, Akira ITO
    LETTER-Automata, Languages and Theory of Computing
    Vol: E80-D No:1  Page(s): 86-90

    This paper investigates the accepting powers of multi-inkdot two-way alternating pushdown automata (Turing machines) with sublogarithmic space and constant leaf-size. For each k ≥ 1 and each m ≥ 0, let weak-ASPACE_m[L(n), k] denote the class of languages accepted by simultaneously weakly L(n) space-bounded and k leaf-bounded m-inkdot two-way alternating Turing machines, and let strong-2APDA_m[L(n), k] denote the class of languages accepted by simultaneously strongly L(n) space-bounded and k leaf-bounded m-inkdot two-way alternating pushdown automata. We show that (1) strong-2APDA_m[log log n, k+1] - weak-ASPACE_m[o(log n), k] ≠ φ for each k ≥ 1 and each m ≥ 1, and (2) strong-2APDA_(m+1)[log log n, k] - weak-ASPACE_m[o(log n), k] ≠ φ for each k ≥ 1 and each m ≥ 0.

  • Characteristic Polynomials of Binary Complementary Sequences

    Satoshi UEHARA, Kyoki IMAMURA
    PAPER-Information Theory and Coding Theory
    Vol: E80-A No:1  Page(s): 193-196

    Recently, two interesting conjectures on the linear complexity of binary complementary sequences of length 2^n N_0 were given by Karkkainen and Leppanen, where those sequences are considered as periodic sequences with period 2^n N_0 and are constructed by successive concatenations or successive interleavings from a pair of kernel complementary sequences of length N_0. Their conjectures were derived from numerical examples and suggest that those sequences have very large linear complexities. In this paper we give the exact formula of the characteristic polynomials for those complementary sequences and show that their conjectures are true.
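
    To make the complementarity property concrete (generic background, not the characteristic-polynomial result of the paper, and stated in the usual ±1 aperiodic-autocorrelation convention rather than over GF(2)), the sketch below grows a complementary pair by concatenation from a length-2 kernel and checks that the aperiodic autocorrelations sum to zero at every nonzero shift.

      # Background illustration: Golay complementary pairs via concatenation from
      # a length-2 kernel, plus a check of the defining complementarity property.
      def aperiodic_autocorr(s, u):
          return sum(s[i] * s[i + u] for i in range(len(s) - u))

      def concatenate_step(a, b):
          """One concatenation step: (a, b) -> (a||b, a||(-b)), doubling the length."""
          return a + b, a + [-x for x in b]

      a, b = [1, 1], [1, -1]            # kernel complementary pair of length 2
      for _ in range(3):                # grow to length 16
          a, b = concatenate_step(a, b)

      N = len(a)
      for u in range(1, N):
          assert aperiodic_autocorr(a, u) + aperiodic_autocorr(b, u) == 0
      print("complementary pair of length", N, "verified")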

  • Generalized Reed-Muller Expressions: Complexity and an Exact Minimization Algorithm

    Tsutomu SASAO, Debatosh DEBNATH
    PAPER
    Vol: E79-A No:12  Page(s): 2123-2130

    A generalized Reed-Muller expression (GRM) is obtained by negating some of the literals in a positive polarity Reed-Muller expression (PPRM). There are at most 2^(n 2^(n-1)) different GRMs for an n-variable function. A minimum GRM is one with the fewest products. This paper presents certain properties of GRMs and an exact minimization algorithm for them. The minimization algorithm uses binary decision diagrams. For up to five variables, all the representative functions of the NP-equivalence classes were generated and minimized. Tables compare the number of products necessary to represent four- and five-variable functions for four classes of expressions: PPRMs, FPRMs, GRMs and SOPs. GRMs require, on average, fewer products than sum-of-products expressions (SOPs), and have easily testable realizations.
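
    As background (not the exact minimization algorithm of the paper), the following sketch computes the PPRM coefficients of a function from its truth table with the standard GF(2) transform; negating chosen literals in the resulting expression is what produces the GRMs discussed above.

      # Background sketch: positive polarity Reed-Muller (PPRM) coefficients from
      # a truth table via the standard GF(2) transform (not the GRM minimizer).
      def pprm_coefficients(tt):
          """tt: truth table of length 2^n (index bit k = value of variable x_k).
          Returns the PPRM (ANF) coefficient vector of the same length."""
          c = list(tt)
          n = len(c).bit_length() - 1
          for i in range(n):
              step = 1 << i
              for j in range(len(c)):
                  if j & step:
                      c[j] ^= c[j ^ step]   # butterfly over GF(2)
          return c

      # Example: f = x0 XOR (x1 AND x2); the nonzero coefficients are the products.
      n = 3
      tt = [(x & 1) ^ (((x >> 1) & 1) & ((x >> 2) & 1)) for x in range(2 ** n)]
      coeffs = pprm_coefficients(tt)
      products = [i for i, c in enumerate(coeffs) if c]   # index bitmasks encode products
      print(products)   # bitmask 0b001 -> x0, 0b110 -> x1*x2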

  • On Simple One-Way Multihead Pushdown Automata

    Yue WANG, Katsushi INOUE, Akira ITO
    PAPER-Automata, Languages and Theory of Computing
    Vol: E79-D No:12  Page(s): 1613-1619

    In [2], Ibarra introduced a restricted version of the one-way multihead pushdown automaton (PDA), called a simple one-way multihead PDA, and showed that such machines recognize only languages with the semilinear property. The main result of this paper is that for each k ≥ 1, simple (sensing) one-way (k + 1)-head PDA's are more powerful than simple (sensing) one-way k-head PDA's. This paper also investigates closure properties of simple (sensing) one-way multihead PDA's.

  • On the Power of Reversals Over the Input Tape of Off-Line Turing Machines

    Hiroaki YAMAMOTO
    PAPER-Algorithm and Computational Complexity
    Vol: E79-D No:11  Page(s): 1495-1502

    For reversal complexity on an off-line Turing machine, i.e., a Turing machine that has a read-only two-way input tape in addition to its work-tapes, two kinds of definition can be considered: in the first, the number of reversals over the input tape is not counted; in the second, it is counted. Unlike for time and space complexities, whether there is any difference between these two definitions does not seem to be trivial. In this paper, we show the following results: (1) let S(n) be any function, and let R(n) be an (R(n), S(n)) reversal-space constructible function; then DRESP_k(R(n), S(n)) ⊆ IDRESP_(k+2)(R(n) + log(nS(n)), n^2 R(n)S(n)); (2) let R(n) and S(n) be any functions; then NRESP_k(R(n), S(n)) ⊆ INRESP_(k+1)(R(n), n^2 S(n)), and ARESP_k(R(n), S(n)) = IARESP_k(R(n), S(n)), where DRESP denotes a deterministic reversal- and space-bounded class under the definition disregarding reversals over the input tape, and IDRESP denotes the corresponding class under the definition counting them. The suffix k denotes the number of work-tapes. The classes NRESP, INRESP, ARESP and IARESP are defined similarly for NTMs and ATMs.

  • Reduction of Computational Complexity in the IA Algorithm

    Isao NAKANISHI, Yoshio ITOH, Yutaka FUKUI
    LETTER-Digital Signal Processing
    Vol: E79-A No:11  Page(s): 1918-1921

    To reduce the computational complexity of the IA algorithm, we propose the thinned-out IA algorithm, in which only one step size is updated at each iteration, and switch it complementarily with the HA algorithm according to the state of convergence. The switching is determined by using the gradient of the error-signal power. These points are investigated through computer simulations.

  • Linear Complexity of Periodic Sequences Obtained from GF(q) Sequences with Period q^n-1 by One-Symbol Insertion

    Satoshi UEHARA, Kyoki IMAMURA
    LETTER-Information Theory and Coding Theory
    Vol: E79-A No:10  Page(s): 1739-1740

    From a GF(q) sequence {a_i}_{i≥0} with period q^n - 1 we can obtain a new periodic sequence {â_i}_{i≥0} with period q^n by inserting one symbol b ∈ GF(q) at the end of each period. Let b_0 = Σ_{i=0}^{q^n-2} a_i. It is first shown that the linear complexity of {â_i}_{i≥0}, denoted LC({â_i}), satisfies LC({â_i}) = q^n if b ≠ -b_0 and LC({â_i}) ≤ q^n - 1 if b = -b_0. Most known sequences are shown to satisfy the zero-sum property, i.e., b_0 = 0. For such sequences satisfying b_0 = 0 it is shown that q^n - LC({a_i}) ≤ LC({â_i}) ≤ q^n - 1 if b = 0.

  • Dual Quantity of the Distortion-Complexity and a Universal Data-Base for Fixed-Rate Data Compression with Distortion

    Jun MURAMATSU, Fumio KANAYA
    LETTER-Source Coding
    Vol: E79-A No:9  Page(s): 1456-1459

    In this paper, we define the distortion at a certain complexity level, which is the dual quantity of the distortion-complexity. We prove a theorem dual to the one we previously gave on the asymptotic property of the distortion-complexity. We also give a universal data-base for fixed-rate data compression with distortion and prove its asymptotic optimality.

  • A Relationship between the Number of Negations and the Circuit Size

    Keisuke TANAKA, Tetsuro NISHINO
    LETTER-Algorithm and Computational Complexity
    Vol: E79-D No:9  Page(s): 1355-1357

    We show a relationship between the number of negations in a circuit and the circuit's size. More precisely, we construct a Boolean function H_n and show that there exists an integer t, which can take only two different values, such that removing one NEGATION gate causes an exponential growth of the optimal circuit size for H_n.
