
Noise on Gradient Systems with Forgetting

  • Conference paper
  • In: Neural Information Processing (ICONIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9491)


Abstract

In this paper, we study the effect of noise on a gradient system with forgetting. Three types of noise are considered: multiplicative noise, additive noise, and chaotic noise. Multiplicative and additive noise are zero-mean Gaussian and are added to the state vector of the system; chaotic noise is added to the gradient vector. Let \({\mathbf x}\) be the state vector of the system, \(S_b\) the variance of the Gaussian noise, \(\kappa '\) the average noise level of the chaotic noise, \(\lambda \) a positive constant, \(V({\mathbf x})\) the energy function of the original gradient system, and \(V_{\otimes }({\mathbf x})\), \(V_{\oplus }({\mathbf x})\) and \(V_{\odot }({\mathbf x})\) the energy functions of the gradient systems when multiplicative, additive and chaotic noise, respectively, is introduced. Suppose \(V({\mathbf x}) = F({\mathbf x}) + \lambda \Vert {\mathbf x}\Vert ^2_2\). It is shown that \(V_{\otimes }({\mathbf x}) = V({\mathbf x}) + (S_b/2) \sum _{j=1}^n (\partial ^2 F({\mathbf x})/\partial x_j^2) x_j^2 - S_b \sum _{j=1}^n \int x_j (\partial ^2 F({\mathbf x})/\partial x_j^2) dx_j\), \(V_{\oplus }({\mathbf x}) = V({\mathbf x}) + (S_b/2) \sum _{j=1}^n \partial ^2 F({\mathbf x})/\partial x_j^2\), and \(V_{\odot }({\mathbf x}) = V({\mathbf x}) + \kappa '\sum _{i=1}^n x_i\). The first two results imply that multiplicative or additive noise has no effect on the system if \(F({\mathbf x})\) is quadratic, while the third result implies that chaotic noise has no effect on the system only if \(\kappa '\) is zero. Since many learning algorithms are based on gradient descent, these results can be applied to analyze the effect of noise on such algorithms.
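To see why the quadratic case is special (a worked check based on the formulas above, not part of the paper), take \(F({\mathbf x}) = \frac{1}{2}{\mathbf x}^T A {\mathbf x} - {\mathbf c}^T {\mathbf x}\), so that \(\partial ^2 F({\mathbf x})/\partial x_j^2 = a_{jj}\) is constant. The multiplicative-noise correction then becomes \((S_b/2)\sum _j a_{jj} x_j^2 - S_b \sum _j a_{jj} x_j^2/2 = 0\), and the additive-noise correction \((S_b/2)\sum _j a_{jj}\) is a constant, so neither changes the gradient of the energy or its minimizer.

The sketch below illustrates the additive-noise case numerically. It is a minimal simulation under assumptions of mine: the matrix \(A\), vector \({\mathbf c}\), step size, noise variance \(S_b\), and the reading that the gradient is evaluated at the noise-perturbed state are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic F(x) = 0.5 x^T A x - c^T x, so grad F(x) = A x - c and
# d^2 F / dx_j^2 = a_jj is constant.  All numerical values here are
# illustrative, chosen only for this sketch.
A = np.array([[3.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 1.5]])
c = np.array([1.0, -2.0, 0.5])
lam, eta, S_b = 0.1, 0.01, 0.05   # forgetting factor, step size, noise variance
steps = 5000

def grad_V(x):
    """Gradient of V(x) = F(x) + lam * ||x||_2^2."""
    return A @ x - c + 2.0 * lam * x

def run(noisy, trials=200):
    """Average final state of the discretized gradient system with forgetting."""
    finals = []
    for _ in range(trials if noisy else 1):
        x = np.zeros(3)
        for _ in range(steps):
            # Additive noise: a zero-mean Gaussian perturbation of the state
            # at which the gradient is evaluated (one plausible reading of
            # "added to the state vector of the system").
            b = rng.normal(0.0, np.sqrt(S_b), 3) if noisy else 0.0
            x = x - eta * grad_V(x + b)
        finals.append(x)
    return np.mean(finals, axis=0)

print("noiseless fixed point  :", run(noisy=False))
print("mean noisy fixed point :", run(noisy=True))
```

Because the gradient is affine when \(F({\mathbf x})\) is quadratic, the noise averages out and the two printed fixed points should nearly coincide, which is consistent with the claim that additive noise has no effect in the quadratic case.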



Acknowledgments

The work presented in this paper is supported in part by research grants from the Taiwan National Science Council, numbers 100-2221-E-126-015 and 101-2221-E-126-016.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Su, C., Sum, J., Leung, CS., Ho, K.IJ. (2015). Noise on Gradient Systems with Forgetting. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9491. Springer, Cham. https://doi.org/10.1007/978-3-319-26555-1_54


  • DOI: https://doi.org/10.1007/978-3-319-26555-1_54


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26554-4

  • Online ISBN: 978-3-319-26555-1
