Newton-Based Simultaneous Perturbation Stochastic Approximation

  • S. Bhatnagar
  • H. Prasad
  • L. Prashanth
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 434)


In this chapter, we present four different Newton SPSA algorithms for the long-run average cost objective. The various Hessian estimates are derived using the random perturbation technique, which requires zero-mean, bounded, symmetric, mutually independent perturbation random variables with a common distribution. These algorithms require four, three, two and one simulations, respectively, and are seen to be efficient in practice. Note that, although we discuss Newton SPSA algorithms here only for the long-run average cost setting, all the Hessian estimation schemes discussed below can also be used in the expected cost setting.
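To illustrate the perturbation technique described above, the following is a minimal sketch of a four-simulation simultaneous-perturbation Hessian estimate, assuming symmetric Bernoulli (±1) perturbations and a deterministic scalar objective `f`. This is an illustrative sketch of the general idea, not the chapter's own algorithms; the function and parameter names are hypothetical.

```python
import numpy as np

def spsa_hessian_estimate(f, theta, c=0.1, c_tilde=0.1, rng=None):
    """One four-simulation SP Hessian estimate at theta.

    Sketch under the stated assumptions: f is a deterministic scalar
    objective, and both perturbation vectors have i.i.d. symmetric
    Bernoulli (+1/-1) components, which are zero-mean, bounded,
    symmetric and mutually independent as the text requires.
    """
    rng = rng or np.random.default_rng()
    d = theta.size
    delta = rng.choice([-1.0, 1.0], size=d)      # primary perturbation
    delta_t = rng.choice([-1.0, 1.0], size=d)    # secondary perturbation

    # One-sided SP gradient estimates at theta +/- c*delta
    # (two simulations each, four in total).
    g_plus = (f(theta + c * delta + c_tilde * delta_t)
              - f(theta + c * delta)) / (c_tilde * delta_t)
    g_minus = (f(theta - c * delta + c_tilde * delta_t)
               - f(theta - c * delta)) / (c_tilde * delta_t)

    dG = (g_plus - g_minus) / (2.0 * c)          # roughly Hessian @ delta
    H = np.outer(dG, 1.0 / delta)                # rank-one Hessian estimate
    return 0.5 * (H + H.T)                       # symmetrize
```

A single such estimate is very noisy; in practice (and in the algorithms of this chapter) the per-iteration estimates are averaged across iterations, so that for a quadratic objective the running mean converges to the true Hessian.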


Keywords: Convergence analysis · Stochastic approximation · Common distribution · Random early detection · Hessian estimate





Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India
