
Keyword Search Result

[Keyword] log-likelihood cost function (1 hit)

1-1 hit
  • Comparison of Convergence Behavior and Generalization Ability in Backpropagation Learning with Linear and Sigmoid Output Units

    Joarder KAMRUZZAMAN, Yukio KUMAGAI, Hiromitsu HIKITA

     
    LETTER-Neural Networks

    Vol. E76-A, No. 6, pp. 1035-1042

    The most commonly used activation function in Backpropagation learning is sigmoidal, while a linear function is also sometimes used at the output layer, on the view that the choice between these activation functions makes no considerable difference in the network's performance. In this letter, we show distinct performance between a network with linear output units and a similar network with sigmoid output units in terms of convergence behavior and generalization ability. We experimented with two types of cost functions, namely, the sum-squared error used in standard Backpropagation and the recently reported log-likelihood. We find that, with the sum-squared error cost function and hidden units with a non-steep sigmoid function, the use of linear units at the output layer instead of sigmoidal ones accelerates convergence considerably, while generalization ability is slightly degraded. A network with sigmoid output units trained with the log-likelihood cost function yields even faster convergence and better generalization, but does not converge at all with linear output units. It is also shown that a network with linear output units needs more hidden units to converge.
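
    For reference, a minimal NumPy sketch of the two cost functions compared in the letter, paired with sigmoid or linear output units. The function names and toy values are illustrative assumptions, not the authors' code; the sketch only shows why the log-likelihood cost presupposes outputs in (0, 1).

    import numpy as np

    def sigmoid(z):
        """Sigmoid activation, the standard choice for output units."""
        return 1.0 / (1.0 + np.exp(-z))

    def sum_squared_error(y, t):
        """Sum-squared error cost used in standard Backpropagation:
        E = 1/2 * sum_k (t_k - y_k)^2."""
        return 0.5 * np.sum((t - y) ** 2)

    def log_likelihood_cost(y, t, eps=1e-12):
        """Log-likelihood (cross-entropy) cost for binary targets:
        E = -sum_k [t_k * ln(y_k) + (1 - t_k) * ln(1 - y_k)].
        Requires outputs in (0, 1), hence sigmoid output units; with
        unbounded linear outputs the logarithms are undefined, which is
        consistent with the non-convergence reported in the letter."""
        y = np.clip(y, eps, 1.0 - eps)  # guard against log(0)
        return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

    # Toy example (illustrative values only)
    z = np.array([0.3, -1.2, 2.0])   # net inputs to the output layer
    t = np.array([1.0, 0.0, 1.0])    # binary targets

    y_sigmoid = sigmoid(z)           # sigmoid output units
    y_linear = z                     # linear output units

    print(sum_squared_error(y_sigmoid, t))   # SSE works with either output type
    print(sum_squared_error(y_linear, t))
    print(log_likelihood_cost(y_sigmoid, t)) # log-likelihood needs (0, 1) outputs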