
Keyword Search Result

[Keyword] probabilistic algorithm (6 hits)

Results 1-6 of 6
  • On Searching Maximal-Period Dynamic LFSRs With at Most Four Switches

    Lin WANG, Zhi HU, Deng TANG
    LETTER
    Vol: E102-A No:1  Page(s): 152-154

    Dynamic linear feedback shift registers (DLFSRs) are a scheme that switches from one LFSR to another. For cryptographic use, each LFSR included in a DLFSR should generate maximal-length sequences, and the number of switches between LFSRs should be small for efficient performance. This letter addresses the search for DLFSRs satisfying these conditions. An efficient probabilistic algorithm is given to find such DLFSRs with two or four switches, and it is proved to succeed with non-negligible probability.
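    For illustration only, the sketch below shows the flavour of such a probabilistic search for a single maximal-period LFSR (it is not the letter's DLFSR algorithm): sample random degree-n feedback polynomials over GF(2) and keep the first one whose register runs through all 2^n - 1 nonzero states. All names and parameters are illustrative.

      import random

      def x_order_mod(f, n):
          """Multiplicative order of x modulo the degree-n polynomial f over GF(2).
          f is an integer whose bit i holds the coefficient of x^i; bit n and bit 0
          must be set.  The order equals 2^n - 1 exactly when f is primitive, i.e.
          when the corresponding LFSR has maximal period."""
          state = 1                      # the polynomial 1
          period = 0
          while True:
              state <<= 1                # multiply by x
              if (state >> n) & 1:
                  state ^= f             # reduce modulo f
              period += 1
              if state == 1:
                  return period

      def random_maximal_lfsr(n, trials=10000):
          """Probabilistic search: sample random feedback polynomials and return the
          first one whose LFSR has maximal period 2^n - 1.  Roughly a 1/n fraction
          of the candidates succeed, so the expected number of trials is small."""
          for _ in range(trials):
              middle = random.getrandbits(n - 1) << 1   # coefficients of x^1 .. x^(n-1)
              f = (1 << n) | middle | 1                 # force the x^n and constant terms
              if x_order_mod(f, n) == (1 << n) - 1:
                  return f
          return None

      if __name__ == "__main__":
          f = random_maximal_lfsr(16)
          print(bin(f) if f is not None else "no candidate found")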

  • On the Probabilistic Computation Method with Reliability for the Weight Distribution of LDPC Codes

    Masanori HIROTOMO, Masami MOHRI, Masakatu MORII
    PAPER-Coding Theory
    Vol: E95-A No:4  Page(s): 790-800

    In the analysis of the maximum-likelihood decoding performance of low-density parity-check (LDPC) codes, the weight distribution is an important factor. In previous work, we presented a probabilistic method for computing the weight distribution of LDPC codes and showed computational results for several LDPC codes. In this paper, we improve that method and propose a probabilistic computation method with reliability for the weight distribution of LDPC codes. Using the proposed method, we can determine the weight distribution with small failure probability.
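    The abstract does not spell out the method, so as a point of contrast only, the naive Monte Carlo tally below (all names hypothetical) estimates a weight distribution by encoding uniformly random messages; it makes clear why the rare low-weight codewords need a dedicated method with a controlled failure probability.

      import numpy as np
      from collections import Counter

      def sample_weight_distribution(G, num_samples=100000, rng=None):
          """Naive Monte Carlo estimate of a code's weight distribution: encode
          uniformly random messages with generator matrix G (over GF(2)) and tally
          the Hamming weights of the codewords.  Scaling the tally by
          2**k / num_samples estimates the number of codewords of each weight;
          low weights are badly sampled, which is exactly the hard part."""
          rng = np.random.default_rng(rng)
          k, n = G.shape
          counts = Counter()
          for _ in range(num_samples):
              m = rng.integers(0, 2, size=k)
              c = m @ G % 2                 # encode over GF(2)
              counts[int(c.sum())] += 1
          scale = 2.0 ** k / num_samples
          return {w: cnt * scale for w, cnt in sorted(counts.items())}

      if __name__ == "__main__":
          # toy example: generator matrix of the [7,4] Hamming code
          G = np.array([[1, 0, 0, 0, 1, 1, 0],
                        [0, 1, 0, 0, 1, 0, 1],
                        [0, 0, 1, 0, 0, 1, 1],
                        [0, 0, 0, 1, 1, 1, 1]])
          print(sample_weight_distribution(G, 20000))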

  • Predicting DataSpace Retrieval Using Probabilistic Hidden Information

    Gile Narcisse FANZOU TCHUISSANG, Ning WANG, Nathalie Cindy KUICHEU, Francois SIEWE, De XU, Shuoyan LIU
    LETTER-Data Engineering, Web Information Systems
    Vol: E93-D No:7  Page(s): 1991-1994

    This paper discusses the issues involved in the design of a complete information retrieval system for DataSpace based on user-relevance probabilistic schemes. First, an Information Hidden Model (IHM) is constructed that takes into account the users' perception of similarity between documents. The system accumulates feedback from the users and employs it to construct user-oriented clusters. IHM allows uncertainty to be integrated over multiple, interdependent classifications and collectively determines the most likely global assignment. Second, three learning strategies are proposed, namely query-related UHH, UHB and UHS (User Hidden Habit, User Hidden Background, and User Hidden keyword Semantics), to closely represent the user's mind. Finally, the probability ranking principle shows that optimum retrieval quality can be achieved under certain assumptions. An optimization algorithm is developed to improve the effectiveness of the probabilistic process: we first predict the data sources in which the query results could be found. Therefore, compared with existing approaches, our retrieval precision is higher and does not depend on the size or heterogeneity of the DataSpace.
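    The models named above (IHM, UHH, UHB, UHS) are specific to the paper; purely as a generic, much-simplified sketch of ranking by probability of relevance learned from accumulated feedback, with every name and data structure hypothetical:

      import math
      from collections import defaultdict

      def rank_by_relevance(query_terms, documents, feedback):
          """Generic probability-ranking sketch (not the paper's models): score
          each document by a smoothed log-odds weight for every query term it
          contains, with the weights derived from positive/negative feedback
          counts.  `documents` maps doc_id -> set of terms; `feedback` maps
          term -> (positive_count, negative_count)."""
          scores = defaultdict(float)
          for term in query_terms:
              pos, neg = feedback.get(term, (0, 0))
              p = (pos + 1) / (pos + neg + 2)   # Laplace-smoothed estimates
              q = (neg + 1) / (pos + neg + 2)
              weight = math.log(p / q)
              for doc_id, terms in documents.items():
                  if term in terms:
                      scores[doc_id] += weight
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      if __name__ == "__main__":
          docs = {"d1": {"probabilistic", "retrieval"}, "d2": {"retrieval", "database"}}
          fb = {"probabilistic": (8, 2), "retrieval": (5, 5)}
          print(rank_by_relevance({"probabilistic", "retrieval"}, docs, fb))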

  • Probabilistic Inference by means of Cluster Variation Method and Linear Response Theory

    Kazuyuki TANAKA
    PAPER-Algorithms
    Vol: E86-D No:7  Page(s): 1228-1242

    Probabilistic inference with a massive probabilistic model usually has exponential-order computational complexity. For such massive probabilistic models, loopy belief propagation was proposed as a scheme for obtaining approximate inference. It is known that generalized loopy belief propagation can be constructed by using a cluster variation method. However, with generalized loopy belief propagation it is difficult to calculate the correlation between pairs of nodes that are not directly connected to each other. In the present paper, we propose a general scheme for calculating an approximate correlation between every pair of nodes in a probabilistic model for probabilistic inference. The scheme is formulated by combining a cluster variation method with a linear response theory.
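    As background only (the paper goes well beyond it), here is a minimal sum-product loopy belief propagation routine for a pairwise binary model; the model and names are illustrative, and the cluster-variation / linear-response correlation estimates of the paper are not implemented.

      import numpy as np

      def loopy_bp(nodes, edges, unary, pairwise, iters=50):
          """Standard sum-product loopy belief propagation on a pairwise model with
          binary variables.  unary[i] is a length-2 factor, pairwise[(i, j)] a 2x2
          factor indexed [x_i, x_j]; messages start uniform and are renormalised."""
          msgs = {(i, j): np.ones(2) / 2 for i, j in edges}
          msgs.update({(j, i): np.ones(2) / 2 for i, j in edges})
          neighbors = {i: [] for i in nodes}
          for i, j in edges:
              neighbors[i].append(j)
              neighbors[j].append(i)
          for _ in range(iters):
              new = {}
              for (i, j) in msgs:
                  psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
                  # product of the unary factor and all incoming messages except from j
                  incoming = unary[i].copy()
                  for k in neighbors[i]:
                      if k != j:
                          incoming *= msgs[(k, i)]
                  m = psi.T @ incoming
                  new[(i, j)] = m / m.sum()
              msgs = new
          beliefs = {}
          for i in nodes:
              b = unary[i].copy()
              for k in neighbors[i]:
                  b *= msgs[(k, i)]
              beliefs[i] = b / b.sum()
          return beliefs

      if __name__ == "__main__":
          nodes = [0, 1, 2]
          edges = [(0, 1), (1, 2), (0, 2)]               # a small loop
          unary = {i: np.array([1.0, 1.0]) for i in nodes}
          unary[0] = np.array([0.9, 0.1])                # bias node 0 toward state 0
          coupling = np.array([[2.0, 1.0], [1.0, 2.0]])  # prefer equal neighbouring states
          pairwise = {e: coupling for e in edges}
          print(loopy_bp(nodes, edges, unary, pairwise))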

  • An Improved Method to Extract Quasi-Random Sequences from Generalized Semi-Random Sources

    Hiroaki YAMAMOTO, Hideo KASUGA
    PAPER-Algorithms and Data Structures
    Vol: E82-A No:3  Page(s): 512-519

    In this paper, we consider new and general models for imperfect sources of randomness and show how to obtain quasi-random sequences from such sources. Intuitively, quasi-random sequences are sequences of almost unbiased elements over a finite set. Our model is as follows: let A be a finite set whose number of elements is a power of 2, and let δ with 1/|A| ≤ δ < 1 be a constant. The source outputs an element of A with probability at most δ, depending on the outputs it has made so far. By definition, such a source outputs at least two elements with nonzero probability. The model is very general, because the source may output only two elements of A with nonzero probability and every other element with probability 0; this property makes generating quasi-random sequences substantially harder. All the methods for existing models such as PRB-models and δ-sources fail to generate quasi-random sequences from our model. We give a new algorithm which generates almost unbiased elements over A from such sources.
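    To make "almost unbiased output" concrete, below is the classical von Neumann extractor for independent biased coin flips. As the abstract notes for existing-model techniques, this kind of method does not handle the adaptive sources considered in the paper; it is shown only as a baseline.

      import random

      def von_neumann_extract(bits):
          """Classical von Neumann extractor: read the input in pairs and output
          0 for the pair (0, 1) and 1 for (1, 0), discarding (0, 0) and (1, 1).
          For independent flips with a fixed unknown bias the output bits are
          exactly unbiased; this is NOT the paper's algorithm and fails for the
          adaptive, set-valued sources considered there."""
          out = []
          for a, b in zip(bits[::2], bits[1::2]):
              if a != b:
                  out.append(a)
          return out

      if __name__ == "__main__":
          biased = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
          extracted = von_neumann_extract(biased)
          print(len(extracted), sum(extracted) / max(len(extracted), 1))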

  • A Probabilistic Algorithm for Determining the Minimum Weight of Cyclic Codes

    Masami MOHRI, Masakatu MORII
    LETTER-Coding Theory
    Vol: E81-A No:10  Page(s): 2170-2173

    A method is presented for determining the minimum weight of cyclic codes. It is a probabilistic algorithm that can be used to find the minimum weight of codes far too large to be treated by any known exact algorithm. It is based on a probabilistic algorithm for determining the minimum weight of linear codes due to Jeffrey S. Leon. By using this method, the minimum weight of cyclic codes is computed efficiently.
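    A much-simplified sketch in the spirit of Leon-type probabilistic minimum-weight search (not the letter's cyclic-code-specific algorithm): repeatedly reduce a generator matrix on a random information set and record the lightest nonzero row, which gives an upper bound that converges to the true minimum weight for small codes such as the toy example below. All names are illustrative.

      import numpy as np

      def rref_mod2(M, cols):
          """Row-reduce M over GF(2), choosing pivots in the given column order."""
          M = M.copy() % 2
          r = 0
          for c in cols:
              pivot = np.nonzero(M[r:, c])[0]
              if len(pivot) == 0:
                  continue
              p = r + pivot[0]
              M[[r, p]] = M[[p, r]]
              for i in range(M.shape[0]):
                  if i != r and M[i, c]:
                      M[i] ^= M[r]
              r += 1
              if r == M.shape[0]:
                  break
          return M

      def probabilistic_min_weight(G, iterations=200, rng=None):
          """Probabilistic minimum-weight estimate for the binary code generated by G:
          permute the columns at random, bring G to (near-)systematic form, and keep
          the lightest nonzero row seen.  Each row of the reduced matrix is a codeword,
          so the result is always an upper bound on the true minimum weight."""
          rng = np.random.default_rng(rng)
          k, n = G.shape
          best = n + 1
          for _ in range(iterations):
              cols = rng.permutation(n)
              R = rref_mod2(G, cols)
              weights = R.sum(axis=1)
              nonzero = weights[weights > 0]
              if len(nonzero):
                  best = min(best, int(nonzero.min()))
          return best

      if __name__ == "__main__":
          # toy example: [7,4] Hamming code, true minimum weight 3
          G = np.array([[1, 0, 0, 0, 1, 1, 0],
                        [0, 1, 0, 0, 1, 0, 1],
                        [0, 0, 1, 0, 0, 1, 1],
                        [0, 0, 0, 1, 1, 1, 1]])
          print(probabilistic_min_weight(G))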