Robust Inference and Local Algorithms

  • Yishay Mansour
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9234)


We introduce a new feature of inference and learning which we call robustness. By robustness we intuitively model the case that the observations of the learner might be corrupted. We survey a novel approach that models such possible corruption as a zero-sum game between an adversary, who selects the corruption, and a learner, who predicts the correct label. The corruption of the observations is done in a worst-case fashion by the adversary, whose main restriction is that it is limited to using one of a fixed, known class of modification functions. The main focus of this line of research is on efficient algorithms, both for the inference setting and for the learning setting. To be efficient in the dimension of the domain, one cannot hope to inspect all the possible inputs. For this reason we invoke local computation algorithms, which inspect only a logarithmic fraction of the domain per query.
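The zero-sum view above can be illustrated with a toy sketch. Everything concrete here, the 3-bit domain, the majority label, and the class of single-bit-flip modification functions, is a hypothetical choice for illustration, not a construction from the surveyed papers; the sketch only shows the minimax structure: the learner commits to a prediction rule over (possibly corrupted) observations, and the adversary then picks the modification function that fools it.

```python
from itertools import product

# Toy robust-inference game (hypothetical setup, for illustration only):
# the true input is a 3-bit vector whose label is the majority of its bits.
# The adversary may corrupt the observation with one function from a small,
# known class; the learner sees only the corrupted bits and must predict.

def majority(x):
    return int(sum(x) >= 2)

def flip(i):
    # Modification function that flips the fixed bit i of the observation.
    return lambda x: tuple(b ^ (j == i) for j, b in enumerate(x))

# Known class of modification functions: identity, or flip one fixed bit.
corruptions = [lambda x: x, flip(0), flip(1), flip(2)]

inputs = list(product([0, 1], repeat=3))
observations = inputs  # corrupted observations live in the same 3-bit domain

# Enumerate the learner's deterministic rules: observation -> predicted label.
rules = [dict(zip(observations, labels))
         for labels in product([0, 1], repeat=len(observations))]

def worst_case_errors(rule):
    # For each true input, the adversary picks the corruption (if any)
    # that makes this rule predict the wrong label.
    return sum(max(rule[c(x)] != majority(x) for c in corruptions)
               for x in inputs)

# The learner plays the rule minimizing the adversary's worst-case damage.
best = min(rules, key=worst_case_errors)
print(worst_case_errors(best))  # -> 4
```

In this toy game the minimax value over deterministic rules is 4 of the 8 inputs: every observation with one or two ones is reachable from true inputs of both labels, so the adversary can always fool the learner on half the domain. The brute-force enumeration over all 256 rules is, of course, exactly the exhaustive inspection that local computation algorithms are designed to avoid on high-dimensional domains.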



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Microsoft Research, Hertzelia, and Tel-Aviv University, Hertzelia, Israel
