Non-Bayesian Learning in the Presence of Byzantine Agents

Conference paper

DOI: 10.1007/978-3-662-53426-7_30

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9888)
Cite this paper as:
Su L., Vaidya N.H. (2016) Non-Bayesian Learning in the Presence of Byzantine Agents. In: Gavoille C., Ilcinkas D. (eds) Distributed Computing. DISC 2016. Lecture Notes in Computer Science, vol 9888. Springer, Berlin, Heidelberg


Abstract

This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state. We focus on the impact of Byzantine agents on the performance of consensus-based non-Bayesian learning. Our goal is to design an algorithm that allows the non-faulty agents to collaboratively learn the true state through local communication.

We propose an update rule wherein each agent updates its local beliefs as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the beliefs of its incoming neighbors and itself (using Byzantine consensus). Under mild assumptions on the underlying network structure and the global identifiability of the network, we show that all the non-faulty agents asymptotically agree on the true state almost surely.
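The flavor of this update rule can be illustrated with a short sketch. The snippet below is an illustrative simplification, not the paper's algorithm: it performs the likelihood-times-weighted-geometric-average update for one agent in log space, and the function name, weights, and toy numbers are all invented for the example; in particular, it omits the Byzantine consensus step that the paper uses to choose which neighbor beliefs and weights enter the average.

```python
import numpy as np

def update_belief(own_log_likelihood, neighbor_beliefs, weights):
    """One illustrative round of a non-Bayesian update: the new belief is
    proportional to the likelihood of the agent's cumulative private signals
    times the weighted geometric average of the beliefs of its incoming
    neighbors and itself. (Hypothetical helper; the paper additionally
    filters neighbor values via Byzantine consensus, omitted here.)"""
    # A weighted geometric average is a weighted arithmetic average in log space.
    log_geo = sum(w * np.log(b) for w, b in zip(weights, neighbor_beliefs))
    log_belief = own_log_likelihood + log_geo
    # Normalize so the result is a probability vector over the states.
    log_belief -= log_belief.max()  # for numerical stability
    belief = np.exp(log_belief)
    return belief / belief.sum()

# Toy example: two candidate states; beliefs of two neighbors plus self.
beliefs = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.7, 0.3])]
weights = [1 / 3, 1 / 3, 1 / 3]
# Log-likelihood of the private signals under each candidate state.
loglik = np.log(np.array([0.8, 0.2]))
print(update_belief(loglik, beliefs, weights))
```

Because both the private signals and the averaged neighbor beliefs favor the first state, the updated belief concentrates further on it; repeating such rounds is what drives the asymptotic agreement on the true state in the paper's analysis.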


Keywords: Distributed learning · Byzantine agreement · Fault tolerance · Adversary attacks · Security

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Champaign, USA
