This paper attempts to establish a theory for a general auto-associative memory model. We start by defining a new concept, the supporting function, to replace the concept of the energy function. As is well known, the energy function relies on the assumption of symmetric interconnection weights, which is used in the conventional Hopfield auto-associative memory but is not evidenced in biological memories. We then formulate the information-retrieving process as a dynamic system by making use of the supporting function and derive the attraction (asymptotic stability) condition as well as the condition for convergence of an arbitrary state to a desired state. The latter represents a key condition for an associative memory to be capable of learning from variant samples. Finally, we develop an algorithm to learn the asymptotic stability condition and an algorithm to train the system to recover desired states from their variant samples. The latter, called the sample learning algorithm, is the first of its kind for associative memories. Both the recalling and learning processes converge in a finite number of steps, a must-have feature for associative memories by analogy with normal human memory. The effectiveness of the recalling and learning algorithms is demonstrated experimentally.
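For context, the sketch below (Python/NumPy, written for this page rather than taken from the paper) shows the conventional Hopfield auto-associative recall that the paper generalizes: patterns are stored in a symmetric Hebbian weight matrix and recalled by asynchronous threshold updates, the setting in which the energy-function argument applies. The GAM model, its supporting function, and its recalling and sample learning algorithms are developed in the paper itself and are not reproduced here.

import numpy as np

def hebbian_weights(patterns):
    # Hebb rule for bipolar patterns (rows, entries in {-1, +1}).  The usual
    # 1/n normalization is omitted because only the signs of the local fields
    # matter below.  W is symmetric by construction -- exactly the assumption
    # on interconnection weights that the paper's supporting function removes.
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, x, max_iters=100):
    # Asynchronous recall: update one unit at a time until no unit changes.
    # For symmetric W this terminates in finitely many steps by the standard
    # energy-function argument.
    x = x.copy()
    for _ in range(max_iters):
        changed = False
        for i in np.random.permutation(len(x)):
            h = W[i] @ x                      # local field of unit i
            new_xi = x[i] if h == 0 else (1 if h > 0 else -1)
            if new_xi != x[i]:
                x[i] = new_xi
                changed = True
        if not changed:                       # fixed point (stored or spurious)
            return x
    return x

# Usage: store two 6-unit patterns, then recall from a one-bit-corrupted probe.
patterns = np.array([[1, -1,  1, -1, 1, -1],
                     [1,  1, -1, -1, 1,  1]])
W = hebbian_weights(patterns)
probe = np.array([1, -1, 1, -1, -1, -1])      # pattern 0 with unit 4 flipped
print(recall(W, probe))                       # recovers [1, -1, 1, -1, 1, -1]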
Hongchi SHI, Yunxin ZHAO, Xinhua ZHUANG, Fuji REN, "GAM: A General Auto-Associative Memory Model" in IEICE TRANSACTIONS on Information,
vol. E85-D, no. 7, pp. 1153-1164, July 2002.
URL: https://global.ieice.org/en_transactions/information/10.1587/e85-d_7_1153/_p
@ARTICLE{e85-d_7_1153,
author={Hongchi SHI and Yunxin ZHAO and Xinhua ZHUANG and Fuji REN},
journal={IEICE TRANSACTIONS on Information},
title={GAM: A General Auto-Associative Memory Model},
year={2002},
volume={E85-D},
number={7},
pages={1153-1164},
keywords={},
doi={},
ISSN={},
month={July},}
TY - JOUR
TI - GAM: A General Auto-Associative Memory Model
T2 - IEICE TRANSACTIONS on Information
SP - 1153
EP - 1164
AU - Hongchi SHI
AU - Yunxin ZHAO
AU - Xinhua ZHUANG
AU - Fuji REN
PY - 2002
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E85-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2002
ER -