Abstract
In this paper, we study the effect of noise on a gradient system with forgetting. Three types of noise are considered: multiplicative noise, additive noise, and chaotic noise. Multiplicative and additive noise are mean-zero Gaussian noise added to the state vector of the system, while chaotic noise is added to the gradient vector. Let \({\mathbf x}\) be the state vector of the system, \(S_b\) be the variance of the Gaussian noise, \(\kappa '\) be the average noise level of the chaotic noise, \(\lambda \) be a positive constant, \(V({\mathbf x})\) be the energy function of the original gradient system, and \(V_{\otimes }({\mathbf x})\), \(V_{\oplus }({\mathbf x})\) and \(V_{\odot }({\mathbf x})\) be the energy functions of the gradient system when multiplicative, additive, and chaotic noise are introduced, respectively. Suppose \(V({\mathbf x}) = F({\mathbf x}) + \lambda \Vert {\mathbf x}\Vert ^2_2\). It is shown that \(V_{\otimes }({\mathbf x}) = V({\mathbf x}) + (S_b/2) \sum _{j=1}^n (\partial ^2 F({\mathbf x})/\partial x_j^2) x_j^2 - S_b \sum _{j=1}^n \int x_j (\partial ^2 F({\mathbf x})/\partial x_j^2)\, dx_j\), \(V_{\oplus }({\mathbf x}) = V({\mathbf x}) + (S_b/2) \sum _{j=1}^n \partial ^2 F({\mathbf x})/\partial x_j^2\), and \(V_{\odot }({\mathbf x}) = V({\mathbf x}) + \kappa '\sum _{i=1}^n x_i\). The first two results imply that multiplicative or additive noise has no effect on the system if \(F({\mathbf x})\) is quadratic, while the third result implies that adding chaotic noise has no effect on the system only if \(\kappa '\) is zero. As many learning algorithms are developed based on the method of gradient descent, these results can be applied to analyze the effect of noise on those algorithms.
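To make the first implication concrete, here is a short worked check under the illustrative assumption that \(F({\mathbf x})\) is a diagonal quadratic, \(F({\mathbf x}) = \tfrac{1}{2}\sum_{j=1}^n a_j x_j^2\); the coefficients \(a_j\) are not from the paper, and any quadratic \(F\) with constant second derivatives behaves in the same way. Substituting \(\partial ^2 F({\mathbf x})/\partial x_j^2 = a_j\) into the expressions above gives
\[
V_{\otimes }({\mathbf x}) = V({\mathbf x}) + \frac{S_b}{2}\sum _{j=1}^n a_j x_j^2 - S_b \sum _{j=1}^n \int a_j x_j \, dx_j = V({\mathbf x}),
\qquad
V_{\oplus }({\mathbf x}) = V({\mathbf x}) + \frac{S_b}{2}\sum _{j=1}^n a_j .
\]
Thus \(V_{\otimes }\) coincides with \(V\), and \(V_{\oplus }\) differs from \(V\) only by a constant, so both noisy systems share the gradient flow and minima of the original system, which is precisely the "no effect" statement for quadratic \(F({\mathbf x})\).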
Acknowledgments
The work presented in this paper is supported in part by research grants from the Taiwan National Science Council, numbered 100-2221-E-126-015 and 101-2221-E-126-016.
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Su, C., Sum, J., Leung, CS., Ho, K.IJ. (2015). Noise on Gradient Systems with Forgetting. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9491. Springer, Cham. https://doi.org/10.1007/978-3-319-26555-1_54
DOI: https://doi.org/10.1007/978-3-319-26555-1_54
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-26554-4
Online ISBN: 978-3-319-26555-1