Several models of feed-forward complex-valued neural networks have been proposed; those with split and polar-represented activation functions have been the most studied. Neural networks with split activation functions are relatively easy to analyze, whereas those with polar-represented activation functions have many applications but are harder to analyze. In previous research, Nitta proved the uniqueness theorem for complex-valued neural networks with split activation functions. Subsequently, he studied their critical points, which cause plateaus and local minima in the learning process. The uniqueness theorem is thus closely related to learning. In the present work, we first define three types of reducibility for feed-forward complex-valued neural networks with polar-represented activation functions and prove that reducible complex-valued neural networks can easily be transformed into irreducible ones. We then prove the uniqueness theorem for complex-valued neural networks with polar-represented activation functions.
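The two activation-function families contrasted in the abstract can be sketched as follows. This is a minimal illustration, assuming `tanh` as the real squashing function; the specific activation functions analyzed in the paper may differ. A split-type activation applies a real function to the real and imaginary parts independently, while a polar-represented (amplitude-phase) activation squashes the modulus and preserves the phase.

```python
import numpy as np

def split_activation(z):
    """Split type: apply a real activation to Re(z) and Im(z) separately."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def polar_activation(z):
    """Polar-represented type: squash the modulus |z|, keep the phase arg(z)."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

z = 1.0 + 1.0j
print(split_activation(z))   # acts component-wise
print(polar_activation(z))   # same phase as z, modulus tanh(|z|)
```

Note that the polar form is phase-equivariant (rotating the input rotates the output by the same angle), which is one reason these networks suit applications such as phase-sensitive signal processing, yet it is not holomorphic, which complicates analysis.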
Masaki KOBAYASHI
University of Yamanashi
Masaki KOBAYASHI, "Uniqueness Theorem of Complex-Valued Neural Networks with Polar-Represented Activation Function" in IEICE TRANSACTIONS on Fundamentals,
vol. E98-A, no. 9, pp. 1937-1943, September 2015, doi: 10.1587/transfun.E98.A.1937.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E98.A.1937/_p
@ARTICLE{e98-a_9_1937,
author={Masaki KOBAYASHI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Uniqueness Theorem of Complex-Valued Neural Networks with Polar-Represented Activation Function},
year={2015},
volume={E98-A},
number={9},
pages={1937--1943},
abstract={Several models of feed-forward complex-valued neural networks have been proposed, and those with split and polar-represented activation functions have been mainly studied. Neural networks with split activation functions are relatively easy to analyze, but complex-valued neural networks with polar-represented functions have many applications but are difficult to analyze. In previous research, Nitta proved the uniqueness theorem of complex-valued neural networks with split activation functions. Subsequently, he studied their critical points, which caused plateaus and local minima in their learning processes. Thus, the uniqueness theorem is closely related to the learning process. In the present work, we first define three types of reducibility for feed-forward complex-valued neural networks with polar-represented activation functions and prove that we can easily transform reducible complex-valued neural networks into irreducible ones. We then prove the uniqueness theorem of complex-valued neural networks with polar-represented activation functions.},
keywords={},
doi={10.1587/transfun.E98.A.1937},
ISSN={1745-1337},
month={September},}
TY - JOUR
TI - Uniqueness Theorem of Complex-Valued Neural Networks with Polar-Represented Activation Function
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1937
EP - 1943
AU - Masaki KOBAYASHI
PY - 2015
DO - 10.1587/transfun.E98.A.1937
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E98-A
IS - 9
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2015/09//
AB - Several models of feed-forward complex-valued neural networks have been proposed, and those with split and polar-represented activation functions have been mainly studied. Neural networks with split activation functions are relatively easy to analyze, but complex-valued neural networks with polar-represented functions have many applications but are difficult to analyze. In previous research, Nitta proved the uniqueness theorem of complex-valued neural networks with split activation functions. Subsequently, he studied their critical points, which caused plateaus and local minima in their learning processes. Thus, the uniqueness theorem is closely related to the learning process. In the present work, we first define three types of reducibility for feed-forward complex-valued neural networks with polar-represented activation functions and prove that we can easily transform reducible complex-valued neural networks into irreducible ones. We then prove the uniqueness theorem of complex-valued neural networks with polar-represented activation functions.
ER -