Masami TAKATA, Hayaru SHOUNO, Masato OKADA
Decoding error-correcting codes is an important problem in communication theory. To reveal the characteristics of error-correcting codes, several researchers have applied statistical-mechanical approaches to this problem. In our research, we have treated the error-correcting code within a Bayesian inference framework. To carry out the inference in practice, we have applied the naive mean field (NMF) approximation to the maximizer of the posterior marginals (MPM) inference, which is a form of Bayesian inference. In the field of artificial neural networks, this approximation is used to reduce computational cost by substituting deterministic continuous-valued units for stochastic binary units. However, few reports have quantitatively described the performance of this approximation. We have therefore analyzed the approximation performance from a theoretical viewpoint and compared our results with computer simulations.
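As an illustration of the NMF approximation to MPM inference, the following minimal Python sketch decodes a pairwise (Sourlas-type) code sent over a binary symmetric channel: the stochastic binary spins are replaced by deterministic magnetizations iterated to a self-consistent fixed point, and the MPM estimate is the sign of each magnetization. The code construction, channel, inverse temperature, and sizes are illustrative assumptions, not the paper's exact setting.

import numpy as np

rng = np.random.default_rng(0)

# Toy pairwise (Sourlas-type) code: transmit all parities s_i*s_j over a BSC (illustrative).
N = 50
s = rng.choice([-1, 1], size=N)                     # true message bits as Ising spins
flip_p = 0.05                                       # channel flip probability
J = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        parity = s[i] * s[j]
        if rng.random() < flip_p:
            parity = -parity
        J[i, j] = J[j, i] = parity

beta = 0.5 * np.log((1 - flip_p) / flip_p)          # inverse temperature matched to the channel

# Naive mean field (NMF) iteration: stochastic binary units are replaced by
# deterministic magnetizations m_i in [-1, 1], updated to a fixed point.
m = 0.01 * rng.standard_normal(N)
for _ in range(200):
    m_new = np.tanh(beta * (J @ m) / N)             # 1/N scaling keeps the local field O(1)
    if np.max(np.abs(m_new - m)) < 1e-8:
        m = m_new
        break
    m = m_new

s_hat = np.where(m >= 0, 1, -1)                     # MPM decision: sign of each marginal magnetization
overlap = abs(np.mean(s_hat * s))                   # overlap, up to the global gauge of the pairwise code
print(f"overlap with the true message: {overlap:.3f}")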
In this paper, an associative memory model with the forgetting process proposed by Mezard et al. is investigated as a means of storing sparsely encoded patterns, using the SCSNA proposed by Shiino and Fukai. As in the case of storing non-sparse (unbiased) patterns analyzed by Mezard et al., this sparsely encoded associative memory model is also free from a catastrophic deterioration of memory caused by pattern overloading. We theoretically obtain a relationship between the storage capacity and the forgetting rate, and find that there is an optimal forgetting rate at which the storage capacity is maximized; we call this maximum the optimal storage capacity. As the memory-pattern firing rate decreases, the optimal storage capacity increases and the optimal forgetting rate decreases. Furthermore, we show that the capacity rate (i.e., the ratio of the storage capacity under the conventional correlation learning rule to the optimal storage capacity) is almost constant with respect to the memory-pattern firing rate.
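The palimpsest behaviour described above can be illustrated with a small simulation. The sketch below stores a stream of sparse 0/1 patterns with an exponentially forgetting covariance rule and tests the recall of patterns of different ages; the learning rule, the fixed-threshold dynamics, and all parameter values are illustrative assumptions rather than the analytical setting of the paper.

import numpy as np

rng = np.random.default_rng(1)

N, f = 500, 0.1          # network size and memory-pattern firing rate (sparseness)
lam = 0.98               # forgetting factor: each new pattern attenuates the older ones
P = 300                  # number of sequentially presented patterns
theta = 0.5 - f          # fixed firing threshold of the 0/1 retrieval dynamics

patterns = (rng.random((P, N)) < f).astype(float)

# Sparse covariance learning rule with forgetting (old memories decay exponentially).
J = np.zeros((N, N))
for xi in patterns:
    J = lam * J + np.outer(xi - f, xi - f) / (N * f * (1 - f))
np.fill_diagonal(J, 0.0)

def recall(xi, steps=20, noise=0.05):
    """Start from a corrupted pattern and run synchronous threshold dynamics."""
    state = xi.copy()
    flip = rng.random(N) < noise
    state[flip] = 1.0 - state[flip]
    for _ in range(steps):
        state = (J @ state - theta > 0).astype(float)
    # overlap normalised so that perfect retrieval gives roughly 1
    return np.dot(xi - f, state) / (N * f * (1 - f))

# Recently stored patterns are retrieved while old ones are forgotten (palimpsest behaviour).
for age in [0, 10, 50, 150]:
    print(f"pattern age {age:3d}: overlap after recall = {recall(patterns[P - 1 - age]):.2f}")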
Kazushi MIMURA, Masato OKADA, Koji KURATA
In this paper, the dependence of the storage capacity of an analogue associative memory model with nonmonotonic neurons on static synaptic noise and static threshold noise is shown. This dependence is calculated analytically by means of the self-consistent signal-to-noise analysis (SCSNA) proposed by Shiino and Fukai. It is known that the storage capacity of an associative memory model can be improved markedly by replacing the usual sigmoid neurons with nonmonotonic ones, and the Hopfield model has been shown theoretically to be fairly robust against static synaptic noise. In this paper, it is shown that when the nonmonotonicity of the neurons is high, the storage capacity decreases rapidly as the static synaptic noise increases. It is also shown that the reduction of the storage capacity is more sensitive to an increase in static threshold noise than to an increase in static synaptic noise.
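A finite-size check of this qualitative effect can be run as follows: a Hebbian network of piecewise nonmonotonic units is loaded with random patterns, static Gaussian noise is added to the couplings or to the thresholds, and the retrieval overlap is measured. The output function, the noise scaling, and all parameter values below are illustrative assumptions; the paper's results themselves are analytical (SCSNA).

import numpy as np

rng = np.random.default_rng(2)

N, alpha = 1000, 0.1
P = int(alpha * N)                            # number of stored patterns (loading rate alpha)
theta = 1.8                                   # nonmonotonicity parameter of the output function

def F(h):
    """Piecewise nonmonotonic output: sgn(h) for |h| <= theta, reversed beyond theta."""
    return np.where(np.abs(h) <= theta, np.sign(h), -np.sign(h))

xi = rng.choice([-1.0, 1.0], size=(P, N))     # random +-1 memory patterns
J0 = xi.T @ xi / N                            # Hebbian couplings
np.fill_diagonal(J0, 0.0)

def overlap_after_retrieval(delta_syn, delta_thr, steps=30):
    """Retrieve pattern 0 under static synaptic noise and static threshold noise."""
    J = J0 + delta_syn * rng.standard_normal((N, N)) / np.sqrt(N)   # static synaptic noise
    np.fill_diagonal(J, 0.0)
    thr = delta_thr * rng.standard_normal(N)                        # static threshold noise
    s = xi[0].copy()
    for _ in range(steps):
        s = F(J @ s - thr)
    return np.mean(xi[0] * s)

for d_syn, d_thr in [(0.0, 0.0), (0.4, 0.0), (0.8, 0.0), (0.0, 0.4), (0.0, 0.8)]:
    m = overlap_after_retrieval(d_syn, d_thr)
    print(f"synaptic noise {d_syn:.1f}, threshold noise {d_thr:.1f}: overlap = {m:.2f}")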
Kazushi MIMURA, Masato OKADA, Koji KURATA
An associative memory model with a forgetting process a la Mezard et al. is investigated for a piecewise nonmonotonic output function, using the SCSNA proposed by Shiino and Fukai. As in the formal monotonic two-state model analyzed by Mezard et al., the nonmonotonic model discussed here is also free from a catastrophic deterioration of memory due to overloading. We theoretically obtain a relationship between the storage capacity and the forgetting rate, and find that there is an optimal value of the forgetting rate at which the storage capacity is maximized for a given nonmonotonicity. The maximal storage capacity and the capacity ratio (the ratio of the storage capacity under the conventional correlation learning rule to the maximal storage capacity) increase with nonmonotonicity, whereas the optimal forgetting rate decreases with nonmonotonicity.
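The interplay of the forgetting rule and the nonmonotonic output can be sketched in the same spirit. The weights, the piecewise output function, and all parameter values below are illustrative assumptions; the simulation only shows the qualitative palimpsest behaviour, not the SCSNA results.

import numpy as np

rng = np.random.default_rng(3)

N, P = 1000, 500
lam = 0.995                                      # forgetting factor per presented pattern
theta = 1.8                                      # nonmonotonicity parameter

xi = rng.choice([-1.0, 1.0], size=(P, N))
w = lam ** np.arange(P - 1, -1, -1)              # older patterns carry exponentially smaller weights
J = (xi * w[:, None]).T @ xi / N                 # Hebbian rule with exponential forgetting
np.fill_diagonal(J, 0.0)

def F(h):
    """Piecewise nonmonotonic output: sgn(h) for |h| <= theta, reversed beyond theta."""
    return np.where(np.abs(h) <= theta, np.sign(h), -np.sign(h))

def recall_overlap(mu, steps=30):
    s = xi[mu].copy()
    for _ in range(steps):
        s = F(J @ s)
    return np.mean(xi[mu] * s)

# Recent patterns are retrieved, sufficiently old ones are forgotten.
for age in [0, 20, 100, 300]:
    print(f"pattern age {age:3d}: overlap = {recall_overlap(P - 1 - age):.2f}")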
Kazuho WATANABE, Hiroyuki TANAKA, Keiji MIURA, Masato OKADA
Neuronal spike timings are irregular and are modeled as a one-dimensional point process. The Bayesian approach is generally used to estimate the time-dependent firing-rate function from sequences of spike timings, and it can also be used to estimate the firing rate from only a single spike sequence. However, the rate function generally has too many degrees of freedom, so approximation techniques are often used to carry out the Bayesian estimation. We applied the transfer matrix method, which efficiently computes the exact marginal distribution, to firing-rate estimation and developed an algorithm that yields exact results within the Bayesian framework. Using this estimation method, we investigated how a mismatch in the prior hyperparameter value affects the marginal distribution and the firing-rate estimate.
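A minimal sketch of a transfer matrix (forward-backward) computation for firing-rate estimation is given below: the log firing rate is discretized onto a grid, a Gaussian random-walk prior plays the role of the smoothness hyperparameter, and the two recursions return the exact posterior marginals on that grid. The discretization, the specific prior, and all parameter values are assumptions for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic spike data: counts per time bin from a time-varying Poisson rate (illustrative).
T, dt = 200, 0.05
t = np.arange(T) * dt
true_rate = 20 + 15 * np.sin(2 * np.pi * t / 5.0)         # firing rate in Hz
counts = rng.poisson(true_rate * dt)

# Discretized state space for the log firing rate.
K = 80
x = np.linspace(np.log(1.0), np.log(100.0), K)
rate_grid = np.exp(x)

step_std = 0.1                                            # prior hyperparameter: random-walk step std
dx = x[:, None] - x[None, :]
Tmat = np.exp(-0.5 * (dx / step_std) ** 2)                # transfer matrix of the smoothness prior
Tmat /= Tmat.sum(axis=1, keepdims=True)

def lik(n):
    """Poisson likelihood of a count n under each grid rate (constant factorial omitted)."""
    lam = rate_grid * dt
    return np.exp(n * np.log(lam) - lam)

# Forward-backward (transfer matrix) recursions: exact marginals on the discretized grid.
fwd = np.zeros((T, K))
bwd = np.zeros((T, K))
fwd[0] = lik(counts[0]) / K
for k in range(1, T):
    fwd[k] = lik(counts[k]) * (Tmat.T @ fwd[k - 1])
    fwd[k] /= fwd[k].sum()                                # renormalize for numerical stability
bwd[-1] = 1.0
for k in range(T - 2, -1, -1):
    bwd[k] = Tmat @ (lik(counts[k + 1]) * bwd[k + 1])
    bwd[k] /= bwd[k].sum()

post = fwd * bwd
post /= post.sum(axis=1, keepdims=True)                   # posterior marginal per time bin
rate_hat = post @ rate_grid                               # posterior-mean firing rate

print(f"RMSE of the estimated rate: {np.sqrt(np.mean((rate_hat - true_rate) ** 2)):.2f} Hz")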
Bayesian methods are often applied to estimate the event rate from a series of event occurrences. However, the Bayesian posterior distribution requires computing the marginal likelihood, which generally involves an analytically intractable integration. Because the event rate is defined in a very high-dimensional space, obtaining the Bayesian posterior distribution for the rate is computationally demanding. We estimate the rate underlying a sequence of event counts by deriving an approximate Bayesian inference algorithm for the time-varying binomial process, which enables us to calculate the posterior distribution analytically. We also provide a method for estimating the prior hyperparameter, which determines the smoothness of the estimated event rate. Moreover, we provide an efficient method for computing upper and lower bounds of the marginal likelihood, which allow the approximation accuracy to be evaluated. Numerical experiments demonstrate the effectiveness of the proposed method in terms of estimation accuracy.
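As one way to make such an approximation concrete, the sketch below applies a Laplace (Gaussian) approximation around the MAP path of a logit-transformed event rate under a Gaussian random-walk prior, for time-varying binomial counts. This is not the paper's specific algorithm, nor its marginal-likelihood bounds; the prior, the fixed hyperparameter value, and the synthetic data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

# Synthetic time-varying binomial data (illustrative).
T = 200
n = np.full(T, 20)                                   # trials per time bin
p_true = 1 / (1 + np.exp(-1.5 * np.sin(2 * np.pi * np.arange(T) / 80)))
k = rng.binomial(n, p_true)                          # observed event counts

lam = 50.0                                           # smoothness hyperparameter (assumed fixed here)

# Precision of the Gaussian random-walk prior on x_t (the logit of the event rate): lam * chain Laplacian.
L = np.zeros((T, T))
idx = np.arange(T - 1)
L[idx, idx] += 1.0
L[idx + 1, idx + 1] += 1.0
L[idx, idx + 1] -= 1.0
L[idx + 1, idx] -= 1.0

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Laplace (Gaussian) approximation: Newton iterations for the MAP path of the logit rate.
x = np.zeros(T)
for _ in range(50):
    p = sigmoid(x)
    grad = (k - n * p) - lam * (L @ x)               # gradient of the log posterior
    hess = -np.diag(n * p * (1 - p)) - lam * L       # Hessian of the log posterior (negative definite)
    step = np.linalg.solve(hess, -grad)
    x = x + step
    if np.max(np.abs(step)) < 1e-8:
        break

p_hat = sigmoid(x)                                   # approximate posterior-mean event rate
cov = np.linalg.inv(-hess)                           # Gaussian approximation of the posterior covariance
se = np.sqrt(np.diag(cov)) * p_hat * (1 - p_hat)     # delta-method standard error on the rate scale

print(f"RMSE of the estimated event rate: {np.sqrt(np.mean((p_hat - p_true) ** 2)):.3f}")
print(f"mean approximate posterior std of the rate: {np.mean(se):.3f}")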