
Reflecting on Social Influence in Networks

Published in: Journal of Logic, Language and Information

Abstract

In many social contexts, social influence seems inescapable: the behavior of others influences us to modify ours, and vice versa. However, social psychology is full of phenomena in which individuals experience a discrepancy between their public behavior and their private opinion. This raises two central questions. First, how does an individual reason about the behavior of others and their private opinions in situations of social influence? And second, what are the laws of the resulting information dynamics? In this paper, we address these questions by introducing a formal framework for representing reasoning about individuals’ private opinions and public behavior under the dynamics of social influence in social networks. Moreover, we dig deeper into the involved information dynamics by modeling how individuals can learn about each other based on this reasoning. This compels us to introduce a new formal notion of reflective social influence. Finally, we initiate the work on proof theory and automated reasoning for our framework by introducing a sound and complete tableau system for a fragment of our logic. This also constitutes the first tableau system for the “Facebook logic” of J. Seligman, F. Liu, and P. Girard.



Notes

  1. Models for diffusion of innovations (Granovetter 1978) or for the creation of micro-cultures (Axelrod 1997) are two among many important examples of the power of social network models in explaining complex social phenomena.

  2. In line with the tradition in modal and epistemic logic we may also refer to such states as (possible) worlds.

  3. In other words, we will define knowledge and uncertainty in the traditional S5 way of epistemic logic.

  4. To avoid circular definitions, the set \(\mathcal {L}\) must not contain precondition formulas involving \([\mathcal {L}]\) itself. Nevertheless, we can allow formulas of \(\mathcal {L}_\mathcal {KDL}\) in \(\mathcal {L}\) constructed at an “earlier stage” in a simultaneous inductive definition of learning modalities and the language \(\mathcal {L}_\mathcal {KDL}\).

    Similar restrictions will be imposed in the definition of \(\mathsf {DT}\) below in Definition 7.

  5. We use the standard abbreviations for \(\vee , \rightarrow \), and \(\leftrightarrow \). Moreover we will denote the dual of F by \(\langle F \rangle \) and the dual of K by \(\langle K \rangle \), in other words \(\langle F \rangle \varphi := \lnot F \lnot \varphi \) and \(\langle K \rangle \varphi := \lnot K\lnot \varphi \).

  6. As for the modalities \([\mathcal {L}]\) in Definition 4, the dynamic transformation \(\mathcal {D} = (\varPhi , \mathsf {post})\) must not contain precondition formulas in \(\varPhi \) involving \([\mathcal {D}]\) itself, but it can contain formulas of \(\mathcal {L}_\mathcal {KDL}\) constructed at an “earlier stage” in a simultaneous inductive definition of dynamic transformations and the language \(\mathcal {L}_\mathcal {KDL}\). In other words, one should view Definitions 7 and 4 as one simultaneous recursive definition.

  7. Note that our framework does not rely on the assumption that the network structure is common knowledge.

  8. Modulo a translation of standard atomic propositions into our feature propositions for a countable set of properties taking only two values.

  9. It should be mentioned, however, that Sano (2014) has recently and independently developed a labeled sequent system and an axiomatization for the Facebook logic. Sano’s labeled sequent system has not yet been fully published, so it remains difficult to give a proper comparison with our tableau system; still, the approaches seem to differ considerably. As noted by Fitting (2012) for modal logic, there is a correspondence between nested sequent systems and prefixed tableaux. Nevertheless, we leave this non-trivial comparison with Sano’s system for future work.

  10. The symbols “\(\asymp \)” and “\(\sim \)” were also used to represent the network and epistemic accessibility relations in our semantics, but here we reuse them for accessibility formulas. Since the accessibility formulas are intended to specify the structure of the model constructed in the completeness proof, this reuse seems natural.

References

  • Abelson, R. P. (1964). Mathematical models of the distribution of attitudes under controversy. In N. Frederiksen & H. Gulliksen (Eds.), Contributions to mathematical psychology (pp. 142–160). New York, NY: Holt, Rinehart & Winston.

  • Acemoglu, D., & Ozdaglar, A. (2011). Opinion dynamics and learning in social networks. Dynamic Games and Applications, 1(1), 3–49. doi:10.1007/s13235-010-0004-1.


  • Andersen, H. C. (1837). Kejserens nye klæder (The Emperor’s New Clothes). In Eventyr, fortalte for Børn. Tredie Hefte (Fairy Tales Told for Children. Third Collection). Copenhagen: C. A. Reitzel.

  • Areces, C., & ten Cate, B. (2007). Hybrid logics. In P. Blackburn, J. van Benthem, & F. Wolter (Eds.), Handbook of modal logic (pp. 821–868). Amsterdam: Elsevier.


  • Aucher, G., Balbiani, P., del Cerro, L. F., & Herzig, A. (2009). Global and local graph modifiers. Electronic Notes in Theoretical Computer Science, 231, 293–307. doi:10.1016/j.entcs.2009.02.042. Proceedings of the 5th workshop on methods for modalities (M4M5 2007).

  • Axelrod, R. (1997). Complexity of cooperation: Agent-based models of competition and collaboration. Princeton, NJ: Princeton University Press.


  • Balbiani, P., Ditmarsch, H. V., Herzig, A., & de Lima, T. (2010). Tableaux for public announcement logic. Journal of Logic and Computation, 20(1), 55–76. doi:10.1093/logcom/exn060.


  • Baltag, A., Christoff, Z., Hansen, J. U., & Smets, S. (2013). Logical models of informational cascades. In J. van Benthem & F. Liu (Eds.), Logic across the university: Foundations and applications, studies in logic (pp. 405–432). London: College Publications.


  • Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992–1026.


  • Bikhchandani, S., Hirshleifer, D., & Welch, I. (1998). Learning from the behavior of others: Conformity, fads, and informational cascades. Journal of Economic Perspectives, 12(3), 151–170.


  • Bolander, T., & Blackburn, P. (2007). Termination for hybrid tableaus. Journal of Logic and Computation, 17(3), 517–554.


  • Centola, D., Willer, R., & Macy, M. (2005). The Emperor’s dilemma. A computational model of self-enforcing norms. American Journal of Sociology, 110(4), 1009–1040.


  • Christoff, Z., & Hansen, J. U. (2013). A two-tiered formalization of social influence. In H. Huang, D. Grossi, & O. Roy (Eds.), Logic, rationality and interaction, proceedings of the fourth international workshop (LORI 2013), Lecture notes in computer science (Vol. 8196, pp. 68–81). Springer.

  • Christoff, Z., & Hansen, J. U. (2015). A logic for diffusion in social networks. Journal of Applied Logic, 13(1), 48–77. doi:10.1016/j.jal.2014.11.011.


  • DeGroot, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121.


  • Easley, D., & Kleinberg, J. (2010). Networks, crowds, and markets: Reasoning about a highly connected world. New York, NY: Cambridge University Press.


  • Fitting, M. (2012). Prefixed tableaus and nested sequents. Annals of Pure and Applied Logic, 163(3), 291–313. doi:10.1016/j.apal.2011.09.004.


  • Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83, 1420–1443.


  • Halbesleben, J. R. B., Wheeler, A. R., & Buckley, M. R. (2007). Understanding pluralistic ignorance: Application and theory. Journal of Managerial Psychology, 22(1), 65–83.


  • Jackson, M. O. (2010). Social and economic networks. Princeton: Princeton University Press.


  • Kennedy, J., & Eberhart, R. C. (2001). Swarm intelligence. San Francisco: Morgan Kaufmann.


  • Kooi, B., & Renne, B. (2011). Arrow update logic. The Review of Symbolic Logic, 4, 536–559. doi:10.1017/S1755020311000189.


  • Latané, B., & Darley, J. M. (1969). Bystander “Apathy”. American Scientist, 57(2), 244–268. http://www.jstor.org/stable/27828530.

  • Lehrer, K. (1976). When rational disagreement is impossible. Noûs, 10(3), 327–332.


  • Liu, F., Seligman, J., & Girard, P. (2014). Logical dynamics of belief change in the community. Synthese. doi:10.1007/s11229-014-0432-3.

  • Mason, W. A., Conrey, F., & Smith, E. R. (2007). Situating social influence processes: Multidirectional flows of influence within social networks. Personality and Social Psychology Review, 11, 279–300.


  • O’Gorman, H. J. (1986). The discovery of pluralistic ignorance: An ironic lesson. Journal of the History of the Behavioral Sciences, 22, 333–347.


  • Prentice, D. A., & Miller, D. T. (1993). Pluralistic ignorance and alcohol use on campus: Some consequences of misperceiving the social norm. Journal of Personality and Social Psychology, 64(2), 243–256.


  • Rendsvig, R. K. (2014). Pluralistic ignorance in the bystander effect: Informational dynamics of unresponsive witnesses in situations calling for intervention. Synthese, 191(11), 2471–2498.


  • Sano, K. (2014). Axiomatizing epistemic logic of friendship via tree-sequent calculus. Paper at Kanazawa workshop for epistemic logic and its dynamic extensions, Kanazawa, Japan 2014. http://www.jaist.ac.jp/~v-sano/jw2014/index.html.

  • Seligman, J., Liu, F., & Girard, P. (2011). Logic in the community. In M. Banerjee, & A. Seth (Eds.), Logic and its applications, lecture notes in computer science (Vol. 6521, pp. 178–188). Berlin/Heidelberg: Springer. doi:10.1007/978-3-642-18026-2_15.

  • Seligman, J., Liu, F., & Girard, P. (2013). Knowledge, friendship and social announcements. In J. van Benthem & F. Liu (Eds.), Logic across the university: Foundations and applications: Proceedings of the Tsinghua logic conference, Beijing, 2013. College Publications.

  • van Benthem, J., & Liu, F. (2007). Dynamic logic of preference upgrade. Journal of Applied Non-Classical Logics, 17(2), 157–182.


  • van Ditmarsch, H., van der Hoek, W., & Kooi, B. (2008). Dynamic epistemic logic. Synthese Library (Vol. 337). The Netherlands: Springer.


  • Westphal, J. D., & Bednar, M. K. (2005). Pluralistic ignorance in corporate boards and firms’ strategic persistence in response to low firm performance. Administrative Science Quarterly, 50(2), 262–298.


  • Zhen, L., & Seligman, J. (2011). A logical model of the dynamics of peer pressure. Electronic Notes in Theoretical Computer Science, 278, 275–288. doi:10.1016/j.entcs.2011.10.021. Proceedings of the 7th workshop on methods for modalities (M4M 2011) and the 4th workshop on logical aspects of multi-agent systems (LAMAS 2011).


Acknowledgments

Zoé Christoff acknowledges support for this research from EPSRC (Grant EP/M015815/1, “Foundations of Opinion Formation in Autonomous Systems”), as well as from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement No. 283963. Carlo Proietti is sponsored by the Swedish Research Council (VR) through the project “Logical modelling of collective attitudes and their dynamics”.

Author information


Correspondence to Jens Ulrik Hansen.

Additional information

Jens Ulrik Hansen—Independent researcher.

Appendices

Appendix 1: The “Simple Influence” Rules of Christoff and Hansen (2013) and the New “Reflexive Influence”

Using the notation of Sect. 4, the table below describes, case by case, the “simple influence” operator of Christoff and Hansen (2013) as given in Definition 1 and the corresponding dynamic transformation (\(\mathcal {I}\)) defined in Definition 11.

The “Inner state” column represents an agent’s possible private state of opinion, while the three subsequent columns provide all eight possible combinations of the expressed opinions of her friends (where 1 stands for true and 0 stands for false); e.g., in rows 4 to 6 an agent has some friend expressing a pro opinion and some friend expressing a contra opinion, but no friend expressing a neutral opinion. The “Update” column represents how the agent’s expressed opinion is updated at the next step; e.g., row 1 should be interpreted as follows: if an agent has a private pro opinion and has some friends expressing a pro opinion, some expressing a contra opinion, and some being neutral, then she will express a pro opinion at the next step.

[Table omitted: figure a, the case-by-case update rules of the simple influence operator]
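Since the full table (figure a) is not reproduced here, the following minimal Python sketch shows only the structure of such a case-based update. All names are hypothetical, and the rule table contains just row 1 as described above; the remaining entries would have to be filled in from Definition 11.

```python
# Sketch of a case-based influence update on a social network.
# The rule table is a placeholder: only row 1 from the text
# ("private pro + friends of all three kinds -> express pro") is filled in;
# the remaining entries must be taken from Definition 11.

PRO, CONTRA, NEUTRAL = "pro", "contra", "neutral"

def friend_profile(agent, friends, expressed):
    """The three table columns: does some friend express pro / contra / neutral?"""
    ops = {expressed[f] for f in friends[agent]}
    return (PRO in ops, CONTRA in ops, NEUTRAL in ops)

# (inner state, (some pro, some contra, some neutral)) -> new expressed opinion
RULES = {
    (PRO, (True, True, True)): PRO,  # row 1 of the table, as described in the text
    # ... remaining cases go here ...
}

def influence_step(friends, inner, expressed):
    """Update every agent's expressed opinion simultaneously (one step)."""
    return {a: RULES.get((inner[a], friend_profile(a, friends, expressed)),
                         expressed[a])  # placeholder default: unchanged
            for a in friends}

friends = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}
inner = {"a": PRO, "b": PRO, "c": CONTRA, "d": NEUTRAL}
expressed = {"a": NEUTRAL, "b": PRO, "c": CONTRA, "d": NEUTRAL}
print(influence_step(friends, inner, expressed))  # agent a matches row 1 -> "pro"
```

Iterating `influence_step` runs the diffusion dynamics; leaving unlisted cases unchanged is a placeholder choice for this sketch, not a rule from the paper.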

Below is a similar description of the new “Reflexive influence” operator as defined in Definition 2 and the corresponding dynamic transformation (\(\mathcal {R}\)) in Definition 12. A “–” entry means that any value may occur in that position.

[Table omitted: figure b, the case-by-case update rules of the reflexive influence operator]

Appendix 2: A Tableau System for \(\mathsf {KL}\)

In Sect. 1.1, we prove soundness and completeness of the tableau system for the logic \(\mathsf {KL}\) given in Sect. 4.3. Then, in Sect. 1.2, we show how any formula of \(\mathsf {KIO}\) can be translated into a semantically equivalent formula of \(\mathsf {KL}\). This in turn gives us a sound and complete procedure for proving validities of \(\mathsf {KIO}\): given a valid formula of \(\mathsf {KIO}\), the translation and the tableau system can be turned into a procedure for showing that this formula is indeed valid. Note, however, that since we do not show termination of our tableau system, we do not have a procedure for deciding whether a given formula of \(\mathsf {KIO}\) is in fact a validity. (We do have semi-decidability, though, as we can search through the countably many tableau proofs.) Whether our tableau system is terminating, and, if not, the quest of finding a terminating one, we leave for future research.

1.1 Soundness and Completeness of the Tableau System for \(\mathsf {KL}\)

Regarding soundness, we need to show that if there is a tableau proof for \(\varphi \) then \(\varphi \) is valid. We show this by proving the contrapositive: if \(\lnot \varphi \) is satisfiable, then there cannot be a tableau proof for \(\varphi \). This amounts to showing that whenever a tableau has a branch all of whose formulas are jointly satisfiable in one model, applying any of the rules creates at least one new branch with the same property. A simple inspection of the rules shows that they satisfy this property. (Note that none of the rules \((\textit{close1})\), \((\textit{close2})\), and \((F\textit{-irrefl})\) can be applied to a branch all of whose formulas are jointly satisfiable in one model.)

Now, for completeness, we first need some definitions and lemmas:

Definition 13

(Saturation) A branch of a tableau is called saturated if no more rules can be applied to it. A tableau is called saturated if all its branches are saturated.

Lemma 1

Any finite tableau can always be extended to a saturated tableau.

The proof of Lemma 1 is straightforward.

Definition 14

(\(\varTheta \)-equivalence, \(\approx _{\varTheta }\)) Let \(\alpha \) and \(\beta \) be two individual prefixes occurring on a branch \(\varTheta \). We say that \(\alpha \) is \(\varTheta \)-equivalent to \(\beta \), in symbols \(\alpha \approx _{\varTheta }\beta \), if \(\alpha =\beta \) or there exist a nominal i and a world prefix \(\sigma \) such that \(\emptyset .\sigma .\alpha i \in \varTheta \) and \(\emptyset .\sigma .\beta i \in \varTheta \). We denote by \([\alpha ]_{\varTheta }\) the \(\approx _{\varTheta }\)-equivalence class of \(\alpha \).
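To make the definition concrete, here is a small Python sketch that computes the \(\approx _{\varTheta }\)-equivalence classes with a union-find structure. The representation of a branch as a set of triples is ad hoc, not the paper's notation; the sketch computes the transitive closure of the generated relation, which on a saturated branch coincides with \(\approx _{\varTheta }\) by Lemma 2 below.

```python
# Sketch: computing the Theta-equivalence classes of Definition 14.
# A branch is modeled, hypothetically, as a set of triples
# (world_prefix, individual_prefix, formula), where nominals are strings.

def theta_classes(branch, nominals, prefixes):
    parent = {a: a for a in prefixes}        # union-find forest

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a

    # alpha ~ beta whenever some pair (sigma, i) labels both prefixes
    # with the same nominal i at the same world prefix sigma
    labelled = {}                            # (sigma, nominal) -> representative
    for (sigma, alpha, phi) in branch:
        if phi in nominals:
            key = (sigma, phi)
            if key in labelled:
                parent[find(alpha)] = find(labelled[key])
            else:
                labelled[key] = alpha

    classes = {}
    for a in prefixes:
        classes.setdefault(find(a), set()).add(a)
    return list(classes.values())

branch = {("s", "a", "i"), ("s", "b", "i"), ("t", "b", "j"), ("t", "c", "j")}
print(theta_classes(branch, {"i", "j"}, ["a", "b", "c", "d"]))
# a, b, c end up in one class (transitivity via b); d is on its own
```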

Lemma 2

\((\varTheta \)-equivalence Lemma) If \(\varTheta \) is a saturated branch then \(\approx _{\varTheta }\) is an equivalence relation.

Proof

Transitivity is the only non-trivial case. Suppose that \(\alpha \approx _{\varTheta }\beta \) and \(\beta \approx _{\varTheta }\gamma \), but \(\alpha \ne \beta \), \(\beta \ne \gamma \), and \(\alpha \ne \gamma \) (otherwise the case is straightforward). By definition, we have \(\emptyset .\sigma .\alpha i, \emptyset .\sigma .\beta i, \emptyset .\tau .\beta j, \emptyset .\tau .\gamma j \in \varTheta \) for some world prefixes \(\sigma , \tau \) and nominals \(i, j\). From the (Ki) rule and saturation it follows that \(\emptyset .\tau .\alpha i \in \varTheta \) and \(\emptyset .\tau .\beta i \in \varTheta \). From the latter we also derive \(\emptyset .\tau .\gamma i \in \varTheta \) by the (Id) rule. Now \(\alpha \approx _{\varTheta } \gamma \) follows from \(\emptyset .\tau .\alpha i \in \varTheta \) and \(\emptyset .\tau .\gamma i \in \varTheta \). \(\square \)

Given an open saturated branch we can now construct a canonical model.

Definition 15

(The canonical model \(\mathcal {M}^\varTheta \)) Given an open saturated branch \(\varTheta \), let the canonical model \(\mathcal {M}^\varTheta = \langle W^\varTheta , A^\varTheta , (\asymp _w^\varTheta )_{w \in W}, (\sim _a^\varTheta )_{a \in A}, g^\varTheta , V^\varTheta \rangle \) be defined by:

where \(a_0\) is just some fixed element of \(A^\varTheta \).

This canonical model is well-defined as justified by the following lemmas:

Lemma 3

\(\asymp ^{\varTheta }_{\sigma }\) is well-defined, irreflexive and symmetric.

Proof

That \(\asymp ^{\varTheta }_{\sigma }\) is well-defined follows from the rules (Ki), (LRF), and (RRF). Irreflexivity follows using the rule \((F\textit{-irrefl})\) and symmetry using the rule \((F\textit{-sym})\). \(\square \)

Lemma 4

\(\sim ^{\varTheta }_{[\alpha ]}\) is well-defined and an equivalence relation.

Proof

That \(\sim ^{\varTheta }_{[\alpha ]}\) is well-defined follows from the rules (Ki) and (RK). Reflexivity, symmetry, and transitivity follow by closure under the rules \((K\textit{-refl}), (K\textit{-sym})\), and \((K\textit{-trans})\). \(\square \)

Lemma 5

\(g^{\varTheta }\) is well-defined.

Proof

This is straightforward using (Ki). \(\square \)

Lemma 6

\(V^{\varTheta }\) is well-defined.

Proof

The proof of this lemma uses closure under the rules \((Ki), (Id), (\textit{close2})\), and \((\textit{prop.cut})\) and is left to the reader. \(\square \)

Before we can prove completeness we need a few more lemmas.

Lemma 7

For all models \(\mathcal {M} = \langle W, A, (\asymp _w)_{w \in W}, (\sim _a)_{a \in A}, g, V\rangle \) and all learning modalities \(\mathcal {L}_1\) and \(\mathcal {L}_2\), \((\mathcal {M}^{\mathcal {L}_1})^{\mathcal {L}_2} = \mathcal {M}^{\mathcal {L}_1 \cup \mathcal {L}_2}\).

Proof

Let \(\sim _a^1\) denote the uncertainty relation for a in \(\mathcal {M}^{\mathcal {L}_1}\), \(\sim _a^2\) the uncertainty relation for a in \((\mathcal {M}^{\mathcal {L}_1})^{\mathcal {L}_2}\), and \(\sim _a^3\) the uncertainty relation for a in \(\mathcal {M}^{\mathcal {L}_1 \cup \mathcal {L}_2}\). Since a learning update only changes the uncertainty relations, to prove that \((\mathcal {M}^{\mathcal {L}_1})^{\mathcal {L}_2} = \mathcal {M}^{\mathcal {L}_1 \cup \mathcal {L}_2}\) we only need to show that \(\sim _a^2 = \sim _a^3\).

Now, by definition, \(w \sim _a^2 v\) is equivalent to:

$$\begin{aligned}&w \sim _a^1 v \textit{ and, there is no agent } b \textit{ such that } a \asymp _w b \textit{ and;}\nonumber \\&\mathcal {M}^{\mathcal {L}_1}, w, b \models \varphi \textit{ and } \mathcal {M}^{\mathcal {L}_1}, v, b \models \lnot \varphi , \textit{ or } \nonumber \\&\mathcal {M}^{\mathcal {L}_1}, w, b \models \lnot \varphi \textit{ and } \mathcal {M}^{\mathcal {L}_1}, v, b \models \varphi , \textit{ for a } \varphi \in \mathcal {L}_2. \end{aligned}$$
(2)

Since \(\mathcal {L}_2 \subseteq \mathcal {L^-}\), it can easily be shown that \(\mathcal {M}^{\mathcal {L}_1}, v, b \models \varphi \) (or \(\lnot \varphi \)) iff \(\mathcal {M}, v, b \models \varphi \) (or \(\lnot \varphi \)), for all \(\varphi \in \mathcal {L}_2\). But then, (2) is equivalent to:

$$\begin{aligned}&w \sim _a v \textit{ and, there is no agent } b \textit{ such that } a \asymp _w b \textit{ and;}\nonumber \\&\mathcal {M}, w, b \models \varphi \textit{ and } \mathcal {M}, v, b \models \lnot \varphi , \textit{ or } \nonumber \\&\mathcal {M}, w, b \models \lnot \varphi \textit{ and } \mathcal {M}, v, b \models \varphi , \textit{ for a } \varphi \in \mathcal {L}_1 \cup \mathcal {L}_2. \end{aligned}$$
(3)

Now, (3) is clearly equivalent to \(w \sim _a^3 v\) and the proof is complete. \(\square \)
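Lemma 7 can also be illustrated computationally. The sketch below uses an ad hoc toy representation that is not the paper's formal machinery, and restricts learning formulas to atomic feature propositions, whose truth values are unaffected by learning updates (the \(\mathcal {L^-}\) condition used in the proof). It then checks that updating by \(\mathcal {L}_1\) and then by \(\mathcal {L}_2\) yields the same uncertainty relations as updating by \(\mathcal {L}_1 \cup \mathcal {L}_2\) at once.

```python
# Toy check of Lemma 7 for atomic learning formulas.
# unc[a]       : agent a's uncertainty relation (a set of world pairs),
# net[w][a]    : the set of a's friends at world w,
# val[(w,b,p)] : truth value of atomic feature p for agent b at world w.

def learn(unc, net, val, formulas):
    """General learning update: cut (w, v) if some friend of a at w
    visibly disagrees on some formula in `formulas` between w and v."""
    new = {}
    for a, rel in unc.items():
        new[a] = {(w, v) for (w, v) in rel
                  if not any(val[(w, b, p)] != val[(v, b, p)]
                             for b in net[w][a] for p in formulas)}
    return new

worlds, agents = ["w", "v", "u"], ["a", "b"]
net = {w: {"a": {"b"}, "b": {"a"}} for w in worlds}
val = {("w", "a", "p"): True,  ("w", "b", "p"): True,
       ("w", "a", "q"): False, ("w", "b", "q"): True,
       ("v", "a", "p"): True,  ("v", "b", "p"): False,
       ("v", "a", "q"): False, ("v", "b", "q"): True,
       ("u", "a", "p"): False, ("u", "b", "p"): True,
       ("u", "a", "q"): True,  ("u", "b", "q"): False}
unc = {a: {(x, y) for x in worlds for y in worlds} for a in agents}

two_steps = learn(learn(unc, net, val, ["p"]), net, val, ["q"])
one_step = learn(unc, net, val, ["p", "q"])
print(two_steps == one_step)  # True: the two update orders agree, as in Lemma 7
```

Because atomic valuations are never changed by `learn`, each update simply intersects the uncertainty relation with an agreement condition, so the two update orders coincide, which is exactly the invariance the proof exploits for \(\mathcal {L^-}\)-formulas.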

Lemma 8

The following formulas are all valid in \(\mathsf {KL}\):

  1. (i)

    \([\mathcal {L}] q \leftrightarrow q \ , \ \ \) for \(q \in \mathsf {FP}\) or \(q \in \mathsf {NOM}\)

  2. (ii)

    \([\mathcal {L}](\varphi \wedge \psi ) \leftrightarrow [\mathcal {L}]\varphi \wedge [\mathcal {L}]\psi \)

  3. (iii)

    \([\mathcal {L}]\lnot \varphi \leftrightarrow \lnot [\mathcal {L}]\varphi \)

  4. (iv)

    \([\mathcal {L}]F \varphi \leftrightarrow F [\mathcal {L}]\varphi \)

  5. (v)

    \([\mathcal {L}]@_i \varphi \leftrightarrow @_i [\mathcal {L}]\varphi \)

Proof

For the case (iv), we have the following chain of equivalences: \(\mathcal {M}, w, a \models [\mathcal {L}] F \varphi \) iff \(\mathcal {M}^{\mathcal {L}}, w, a \models F \varphi \) iff \(\mathcal {M}^{\mathcal {L}}, w, b \models \varphi \) for all b with \(a \asymp _w b\), iff \(\mathcal {M}, w, b \models [\mathcal {L}] \varphi \) for all b with \(a \asymp _w b\) (since the learning update does not change the network relations), iff \(\mathcal {M}, w, a \models F [\mathcal {L}] \varphi \).

For the other cases, the proofs are similar. \(\square \)

An essential lemma in the completeness proof for the cases of the K modality is Lemma 10 below. However, to prove this key lemma we first need to establish part of the completeness result, namely a truth lemma for all \(\mathcal {L^-}\)-formulas:

Lemma 9

(Truth lemma for \(\mathcal {L^-}\)) Let \(\varTheta \) be an open saturated branch of a tableau, \(\varphi \) a formula of \(\mathcal {L^-}\), and X a finite set of \(\mathcal {L^-}\)-formulas. If \(X.\sigma .\alpha \varphi \in \varTheta \), then \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models \varphi \).

Proof

The proof goes by induction on the complexity of \(\varphi \).

The base cases. For \(\varphi =q\) or \(\varphi = \lnot q\), for a \(q \in \mathsf {FP}\) or \(q \in \mathsf {NOM}, X.\sigma .\alpha \varphi \in \varTheta \) implies that \(\emptyset .\sigma .\alpha \varphi \in \varTheta \) by the \((\emptyset )\) and \((\lnot \emptyset )\) rules. But then, by the \((\textit{close1})\) rule and the definitions of \(g^\varTheta \) and \(V^\varTheta \) it straightforwardly follows that \(\mathcal {M}^\varTheta , \sigma , [\alpha ] \models \varphi \). This, however, is equivalent to \(\mathcal {M}^\varTheta , \sigma , [\alpha ] \models [X]\varphi \), i.e. \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models \varphi \), by Lemma 8 (i) and (iii).

The induction cases. The cases where \(\varphi =\psi \wedge \chi , \varphi =\lnot (\psi \wedge \chi )\), or \(\varphi =\lnot \lnot \psi \) are straightforward using closure under the rules \((\wedge ), (\lnot \wedge )\), and \((\lnot \lnot )\) and Lemma 8 (ii) and (iii).

The case \(\varphi = F \psi \). Assume that \(X.\sigma .\alpha F \psi \) occurs on \(\varTheta \) and that \([\alpha ] \asymp ^{\varTheta }_{\sigma } [\beta ]\) in \((\mathcal {M}^\varTheta )^X\). The latter is equivalent to \([\alpha ] \asymp ^{\varTheta }_{\sigma } [\beta ]\) in \(\mathcal {M}^\varTheta \), as general learning updates do not change the network relations. By definition this means that \(\emptyset .\sigma .\alpha \asymp \sigma .\beta \in \varTheta \), which further implies \(X.\sigma .\alpha \asymp \sigma .\beta \in \varTheta \) by closure under the \((\mathcal {L}\asymp )\) rule. By closure under the (F) rule we further obtain \(X.\sigma .\beta \psi \in \varTheta \) and therefore, by the induction hypothesis, that \((\mathcal {M}^\varTheta )^X, \sigma , [\beta ] \models \psi \). Thus, \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models F \psi \) follows (as \(\beta \) was arbitrary).

The case \(\varphi = \lnot F \psi \). Assume that \(X.\sigma .\alpha \lnot F \psi \) occurs on \(\varTheta \). By closure under the \((\lnot F)\) rule we obtain \(X.\sigma .\alpha \asymp \sigma .\beta \in \varTheta \) and \(X.\sigma .\beta \lnot \psi \in \varTheta \) for some new \(\beta \). This implies \([\alpha ] \asymp ^{\varTheta }_{\sigma } [\beta ]\) in \(\mathcal {M}^\varTheta \) by closure under the \((\mathcal {L}\asymp )\) rule and thus that \([\alpha ] \asymp ^{\varTheta }_{\sigma } [\beta ]\) in \((\mathcal {M}^\varTheta )^X\). Moreover \((\mathcal {M}^\varTheta )^X, \sigma , [\beta ] \models \lnot \psi \) follows by the induction hypothesis. Hence \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models \lnot F \psi \).

The case \(\varphi = @_i \psi \). Assume that \(X.\sigma .\alpha @_i \psi \) occurs on \(\varTheta \). By closure under the (@) rule we obtain \(X.\sigma .\beta i \in \varTheta \) and \(X.\sigma .\beta \psi \in \varTheta \) for some new \(\beta \). Therefore, by the induction hypothesis, we have that both \((\mathcal {M}^\varTheta )^X, \sigma , [\beta ] \models i\) and \((\mathcal {M}^\varTheta )^X, \sigma , [\beta ] \models \psi \). But this implies that \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models @_i \psi \).

The case \(\varphi = \lnot @_i \psi \) is similar to the case \(\varphi = @_i \psi \). \(\square \)

We can now prove the following key lemma:

Lemma 10

Let \(\varTheta \) be an open saturated branch and let \(\mathcal {M}^\varTheta \) be the canonical model defined in Definition 15. If \(X.\sigma .\alpha \varphi \in \varTheta \) for some formula \(\varphi \), then

$$\begin{aligned} Y.\sigma .\alpha \sim \tau .\alpha \in \varTheta \quad \textit{ iff } \quad \ \sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } (\mathcal {M}^\varTheta )^Y \ , \end{aligned}$$
(4)

for all \(Y \subseteq X\).

Proof

The proof goes by induction on the size of Y. The base case is thus the case where \(Y = \emptyset \). Since \((\mathcal {M}^\varTheta )^\emptyset = \mathcal {M}^\varTheta \), (4) follows directly from the definition of \(\mathcal {M}^\varTheta \) in this case.

For the induction step, assume that (4) is true for all \(Y' \subseteq X\) with \(|Y'| < n\) and assume that \(Y \subseteq X\) is such that \(|Y| = n\). We need to prove that (4) holds for Y.

\(\Rightarrow \)” of (4): Assume that \(Y.\sigma .\alpha \sim \tau .\alpha \in \varTheta \). Let \(Y^* = Y \setminus \{\psi \}\) for some \(\psi \in Y\). Then, by closure under the \((\mathcal {L}\sim )\) rule we have \(Y^*.\sigma .\alpha \sim \tau .\alpha \in \varTheta \), which further implies that \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } (\mathcal {M}^\varTheta )^{Y^*}\) by induction. Because \((\mathcal {M}^\varTheta )^Y = ((\mathcal {M}^\varTheta )^{Y^*})^{\{\psi \}}\) (by Lemma 7), to prove that \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } (\mathcal {M}^\varTheta )^{Y}\), amounts to proving that for all \([\beta ] \in A^\varTheta \) if \([\alpha ] \asymp _\sigma [\beta ]\) then:

$$\begin{aligned}&(\mathcal {M}^\varTheta )^{Y^*}, \sigma , [\beta ] \models \psi \ \textit{ and }\ (\mathcal {M}^\varTheta )^{Y^*}, \tau , [\beta ] \models \psi \nonumber \\&\quad \textit{ or } \nonumber \\&(\mathcal {M}^\varTheta )^{Y^*}, \sigma , [\beta ] \models \lnot \psi \ \textit{ and }\ (\mathcal {M}^\varTheta )^{Y^*}, \tau , [\beta ] \models \lnot \psi \ . \end{aligned}$$
(5)

So assume that \([\alpha ] \asymp _\sigma [\beta ]\) for some \([\beta ] \in A^\varTheta \). This implies that \(\emptyset .\sigma .\alpha \asymp \sigma .\beta \in \varTheta \) by definition and further, by closure under the \((\mathcal {L}\asymp )\) rule, that \(Y.\sigma .\alpha \asymp \sigma .\beta \in \varTheta \). But then by closure under the \((X^-)\) rule, either \(Y^*.\sigma .\beta \psi , Y^*.\tau .\beta \psi \in \varTheta \) or \(Y^*.\sigma .\beta \lnot \psi , Y^*.\tau .\beta \lnot \psi \in \varTheta \). Since \(\psi \) is an \(\mathcal {L^-}\)-formula, it now follows by Lemma 9 that (5) is satisfied. This completes the proof of the left-to-right direction of (4) for Y.

\(\Leftarrow \)” of (4): Assume that \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } (\mathcal {M}^\varTheta )^Y\). Again, let \(Y^* = Y \setminus \{\psi \}\) for some \(\psi \in Y\). Then by Lemma 7, \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } ((\mathcal {M}^\varTheta )^{Y^*})^{\{\psi \}}\), which further implies, by definition of the general learning update, that \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } (\mathcal {M}^\varTheta )^{Y^*}\). Thus, by induction \(Y^*.\sigma .\alpha \sim \tau .\alpha \in \varTheta \). But then, by closure under the \((X^+)\) rule:

$$\begin{aligned}&({i}) \quad Y.\sigma .\alpha \sim \tau .\alpha \in \varTheta \ , \quad \textit{ or } \nonumber \\&({ii}) \quad Y^*.\sigma .\beta \psi \in \varTheta \ \textit{ and } \ Y^*.\tau .\beta \lnot \psi \in \varTheta \ \textit{ for some } \beta \textit{ with } Y.\sigma .\alpha \asymp \sigma .\beta \in \varTheta \ , \quad \textit{ or } \nonumber \\&({iii}) \quad Y^*.\sigma .\beta \lnot \psi \in \varTheta \ \textit{ and } \ Y^*.\tau .\beta \psi \in \varTheta \ \textit{ for some } \beta \textit{ with } Y.\sigma .\alpha \asymp \sigma .\beta \in \varTheta \ . \end{aligned}$$
(6)

Now, assume \(({ ii})\) is the case. By Lemma 9 this implies that there is \([\beta ] \in A^\varTheta \) with \([\alpha ] \asymp _\sigma [\beta ]\) such that \((\mathcal {M}^\varTheta )^{Y^*}, \sigma , [\beta ] \models \psi \) and \((\mathcal {M}^\varTheta )^{Y^*}, \tau , [\beta ] \models \lnot \psi \). However, from \(\sigma \sim ^\varTheta _{[\alpha ]} \tau \textit{ in } ((\mathcal {M}^\varTheta )^{Y^*})^{\{\psi \}}\) it follows that for all \([\beta ] \in A^\varTheta \) if \([\alpha ] \asymp _\sigma [\beta ]\) then:

$$\begin{aligned}&(\mathcal {M}^\varTheta )^{Y^*}, \sigma , [\beta ] \models \psi \ \textit{ and }\ (\mathcal {M}^\varTheta )^{Y^*}, \tau , [\beta ] \models \psi \\&\quad \textit{ or } \\&(\mathcal {M}^\varTheta )^{Y^*}, \sigma , [\beta ] \models \lnot \psi \ \textit{ and }\ (\mathcal {M}^\varTheta )^{Y^*}, \tau , [\beta ] \models \lnot \psi \ . \end{aligned}$$

Thus, we have reached a contradiction. Similarly, (iii) of (6) leads to a contradiction, so (i) of (6) has to be the case, i.e. \(Y.\sigma .\alpha \sim \tau .\alpha \in \varTheta \). This completes the right-to-left direction of (4) for Y. \(\square \)

We can now finally provide the essential truth lemma that will ensure us completeness:

Lemma 11

(Truth lemma) Let \(\varTheta \) be an open saturated branch of a tableau, \(\varphi \) a formula of \(\mathcal {L_{KL}}\), and X a finite set of \(\mathcal {L^-}\)-formulas. If \(X.\sigma .\alpha \varphi \in \varTheta \), then \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models \varphi \).

Proof

The proof goes by induction on the complexity of \(\varphi \).

The base cases and the cases for \(\varphi = \psi \wedge \chi , \varphi = \lnot (\psi \wedge \chi ), \varphi = \lnot \lnot \psi , \varphi = F \psi , \varphi = \lnot F \psi , \varphi = @_i \psi \), and \(\varphi = \lnot @_i \psi \) are all dealt with as in the proof of Lemma 9.

The case \(\varphi = K \psi \). Assume that \(X.\sigma .\alpha K \psi \) occurs on \(\varTheta \) and that \(\sigma \sim ^{\varTheta }_{[\alpha ]} \tau \) in \((\mathcal {M}^\varTheta )^X\). By Lemma 10, the latter implies \(X.\sigma .\alpha \sim \tau .\alpha \in \varTheta \). Then, by closure under the (K) rule we obtain \(X.\tau .\alpha \psi \in \varTheta \) and therefore, by the induction hypothesis, that \((\mathcal {M}^\varTheta )^X, \tau , [\alpha ] \models \psi \). Since \(\tau \) was arbitrary with \(\sigma \sim ^{\varTheta }_{[\alpha ]} \tau \) in \((\mathcal {M}^\varTheta )^X\), we obtain \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models K \psi \), as desired.

The case \(\varphi = \lnot K \psi \). Assume that \(X.\sigma .\alpha \lnot K \psi \) occurs on \(\varTheta \). By closure under the \((\lnot K)\) rule we obtain \(X.\sigma .\alpha \sim \tau .\alpha \in \varTheta \) and \(X.\tau .\alpha \lnot \psi \in \varTheta \) for some new \(\tau \). The first implies, by Lemma 10, that \(\sigma \sim ^{\varTheta }_{[\alpha ]} \tau \) in \((\mathcal {M}^\varTheta )^X\), and the latter, by induction, implies that \((\mathcal {M}^\varTheta )^X, \tau , [\alpha ] \models \lnot \psi \). It then follows that \((\mathcal {M}^\varTheta )^X, \sigma , [\alpha ] \models \lnot K \psi \).

The case \(\varphi = [\mathcal {L}]\psi \). Assume that \(X.\sigma .\alpha [\mathcal {L}]\psi \) occurs on \(\varTheta \). Then, by closure under the \(([\mathcal {L}])\) rule we obtain \(X\!\cup \!\mathcal {L}.\sigma .\alpha \psi \in \varTheta \). By induction, this implies that \((\mathcal {M}^\varTheta )^{X\!\cup \!\mathcal {L}}, \sigma , [\alpha ]\models \psi \). Thus, by Lemma 7 we obtain (\(\mathcal {M}^\varTheta )^X, \sigma , [\alpha ]\models [\mathcal {L}]\psi \).

The case \(\varphi = \lnot [\mathcal {L}]\psi \) is similar with the use of the \((\lnot [\mathcal {L}])\) rule and of Lemma 8 (iii). \(\square \)

From this truth lemma completeness easily follows:

Theorem 1

(Completeness for \(\mathsf {KL})\) If \(\varphi \) is valid in \(\mathsf {KL}\), then there is a tableau proof of \(\varphi \).

Proof

The proof goes by contraposition. Assume there is no tableau proof of \(\varphi \). This means that there is a tableau starting with \(\emptyset .\sigma .\alpha \lnot \varphi \) that has an open saturated branch \(\varTheta \) (using Lemma 1). From this branch we can build the canonical model \(\mathcal {M}^\varTheta \). Finally, it follows from Lemma 11 that \(\mathcal {M}^\varTheta , \sigma , [\alpha ] \models \lnot \varphi \), which further implies that \(\varphi \) cannot be valid. \(\square \)

Note that, if one leaves out the rules \(([\mathcal {L}]), (\lnot [\mathcal {L}]), (\emptyset ), (\lnot \emptyset ), (\mathcal {L} \asymp ), (\mathcal {L} \sim ), (X^-)\), and \((X^+)\) as well as dropping all the X’s and Y’s in the prefixes, one obtains a sound and complete tableau system for the Facebook logic of Seligman et al (2011), as previously discussed.

We did not include rules for the simple influence modality \([\mathcal {I}]\) of Christoff and Hansen (2013) in our tableau system (to obtain a tableau system for \(\mathsf {KIO}\)). The reason is, as we will show in the next subsection, that all \([\mathcal {I}]\) modalities can be reduced away in the presence of the learning modalities. This is a well-known technique in the tradition of Dynamic Epistemic Logic (see Ditmarsch et al. 2008). The reason that we did not use the same technique for the learning modalities \([\mathcal {L}]\) is that it does not seem possible. Note that, according to the definition of the learning update \(\mathcal {M}^\mathcal {L}\), we need to talk about a formula q at the same time at two different worlds w and v. This is notoriously not possible in modal logic, but requires more expressive logics on the epistemic dimension, like hybrid logic with the downarrow-binder. Thus, if we added the downarrow-binder to the epistemic dimension we might very well be able to reduce away the learning modality as well. We conjecture that this cannot be done in our current logic. However, we leave it for future work to prove or disprove these claims.

The fact that we are “cutting” edges with the learning updates, not based on what is true at the current agent of evaluation, but at friends of that agent, seemingly makes reduction axioms impossible; it also highlights how our learning modalities differ significantly from the “link cutting” modalities of Benthem and Liu (2007), Aucher et al. (2009) and Kooi and Renne (2011).

Finally, note the essential use of Lemma 9 and how it was proved. There we explicitly used that the formulas in a learning modality \([\mathcal {L}]\) do not contain any knowledge operators K, and it is not obvious how to generalize the proof to allow for them. However, it is an important problem to solve, as it may also prevent us from obtaining a tableau system for the full logic \(\mathsf {KIO}\) through a translation, as we will see in the next subsection.

1.2 An Embedding of \(\mathsf {KIO}\) Into \(\mathsf {KL}\)

To define a translation of \(\mathsf {KIO}\) into \(\mathsf {KL}\) we first need an operation on \(\mathcal {L^-}\) formulas:

Definition 16

(\(\mathcal {I}^{-1}\)) We define \(\mathcal {I}^{-1}: \mathcal {L^-} \rightarrow \mathcal {L^-}\) by:

  • For atomic formulas:

    • \(\mathcal {I}^{-1}({ ep})\) is \(\Big ((\langle F \rangle { ep} \wedge F { ep}) \vee \big ({ ip} \wedge (\langle F\rangle { ep} \vee \lnot \langle F \rangle { ec})\big )\Big )\);

    • \(\mathcal {I}^{-1}({ ec})\) is \(\Big ((\langle F \rangle { ec} \wedge F { ec}) \vee \big ({ ic} \wedge (\langle F\rangle { ec} \vee \lnot \langle F \rangle { ep})\big )\Big )\);

    • \(\mathcal {I}^{-1}({ en})\) is \(\Big (\big ({ in} \wedge \big ((\langle F \rangle { ep} \wedge \langle F \rangle { ec}) \vee (\lnot \langle F \rangle { ep} \wedge \lnot \langle F \rangle { ec})\big )\big ) \ \vee \ \big (\lnot { ip} \wedge \langle F \rangle { ep} \wedge \lnot \langle F \rangle { ec} \wedge \langle F \rangle { en}\big ) \ \vee \ \big (\lnot { ic} \wedge \lnot \langle F \rangle { ep} \wedge \langle F \rangle { ec} \wedge \langle F \rangle { en}\big )\Big )\);

  • If \(\varphi \) is not an atomic formula, \(\mathcal {I}^{-1}(\varphi )\) is the formula resulting from \(\varphi \) by substituting \(\mathcal {I}^{-1}(q)\) for all atomic formulas q occurring in \(\varphi \).
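The substitution clause above can be sketched as a small recursive procedure. The following is an illustrative encoding, not the authors' code: formulas are nested tuples, \(\langle F \rangle \) is written `('diaF', .)`, the box F as `('F', .)`, and `'and'`/`'or'` are used n-ary. Treating the inner-opinion atoms ip, ic, in as fixed points is our assumption here, matching the fact that the influence update only changes expressed attitudes.

```python
def dia_F(phi):                 # <F> phi
    return ('diaF', phi)

ATOMS = {'ep', 'ec', 'en', 'ip', 'ic', 'in'}

def I_inv_atom(q):
    """The clauses of Definition 16 for the expressed atoms."""
    if q == 'ep':
        return ('or',
                ('and', dia_F('ep'), ('F', 'ep')),
                ('and', 'ip', ('or', dia_F('ep'), ('not', dia_F('ec')))))
    if q == 'ec':
        return ('or',
                ('and', dia_F('ec'), ('F', 'ec')),
                ('and', 'ic', ('or', dia_F('ec'), ('not', dia_F('ep')))))
    if q == 'en':
        return ('or',
                ('and', 'in', ('or',
                               ('and', dia_F('ep'), dia_F('ec')),
                               ('and', ('not', dia_F('ep')),
                                       ('not', dia_F('ec'))))),
                ('and', ('not', 'ip'), dia_F('ep'),
                        ('not', dia_F('ec')), dia_F('en')),
                ('and', ('not', 'ic'), ('not', dia_F('ep')),
                        dia_F('ec'), dia_F('en')))
    return q                    # ip, ic, in unchanged (our assumption)

def I_inv(phi):
    """Substitute I^{-1}(q) for every atom occurrence q in phi."""
    if isinstance(phi, str):
        return I_inv_atom(phi) if phi in ATOMS else phi
    return (phi[0],) + tuple(I_inv(arg) for arg in phi[1:])
```

Note that the recursion only rewrites at the leaves, exactly as the definition prescribes: connectives and modalities are copied unchanged.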

The definition is justified by the following lemma:

Lemma 12

The following formulas are valid in \(\mathsf {KIO}\):

  • \(\mathcal {I}^{-1}({ ep}) \ \leftrightarrow \ [\mathcal {I}]{ ep}\)

  • \(\mathcal {I}^{-1}({ ec}) \ \leftrightarrow \ [\mathcal {I}]{ ec}\)

  • \(\mathcal {I}^{-1}({ en}) \ \leftrightarrow \ [\mathcal {I}]{ en}\)

Proof

This is straightforward from how the semantics of \([\mathcal {I}]\) is defined using the table of “Appendix 1”. \(\square \)

This lemma can be generalized a bit. In fact, we have for all \(\mathcal {L^-}\) formulas \(\varphi \), that

$$\begin{aligned} \mathcal {I}^{-1}(\varphi ) \ \leftrightarrow \ [\mathcal {I}]\varphi \end{aligned}$$
(7)

is a validity of \(\mathsf {KIO}\). For general learning modalities, we also make the following definition regarding \(\mathcal {I}^{-1}\):

Definition 17

(\(\mathcal {I}^{-1}(\mathcal {L})\)) For a learning modality \(\mathcal {L} \subseteq \mathcal {L^-}\), define:

$$\begin{aligned} \mathcal {I}^{-1}(\mathcal {L}) := \{ \mathcal {I}^{-1}(\varphi ) \ |\ \varphi \in \mathcal {L} \} \end{aligned}$$

We can now translate the language of \(\mathsf {KIO}\) into the language \(\mathcal {L_{KL}}\).

Definition 18

(The translation) Define \(t\,{:}\,\mathcal {L_{KIO}} \rightarrow \mathcal {L_{KL}}\) recursively in the following way:

$$\begin{aligned} \begin{array}{lllll} t(q) &{}= q &{}\quad t([\mathcal {I}]q) = \mathcal {I}^{-1}(q), \textit{for all } q \in \{{ ep}, { ec, en, ip, ic, in}\}\\ t(i) &{}= i &{}\quad t([\mathcal {I}] i ) = i\\ t(\lnot \varphi ) &{}= \lnot t(\varphi ) &{}\quad t([\mathcal {I}]\lnot \varphi ) = \lnot t([\mathcal {I}]\varphi )\\ t(\varphi \wedge \psi ) &{}= t(\varphi ) \wedge t(\psi ) &{}\quad t([\mathcal {I}](\varphi \wedge \psi )) = t([\mathcal {I}]\varphi ) \wedge t([\mathcal {I}]\psi )\\ t(F \varphi ) &{}= F t(\varphi )&{}\quad t([\mathcal {I}]F \varphi ) = F t([\mathcal {I}]\varphi )\\ t(@_i \varphi ) &{}= @_i t(\varphi ) &{}\quad t([\mathcal {I}]@_i \varphi ) = @_i t([\mathcal {I}]\varphi )\\ t(K \varphi ) &{}= K t(\varphi ) &{}\quad t([\mathcal {I}]K \varphi ) = K t([\mathcal {I}]\varphi )\\ t([\mathcal {L}] \varphi ) &{}= [\mathcal {L}] t(\varphi )&{}\quad t([\mathcal {I}] [\mathcal {L}] \varphi ) = [\mathcal {I}^{-1}(\mathcal {L})]t([\mathcal {I}]\varphi )\\ &{}&{}\quad t([\mathcal {I}][\mathcal {I}]\varphi ) = t([\mathcal {I}]t([\mathcal {I}]\varphi )) \end{array} \end{aligned}$$
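The recursion of Definition 18 can be sketched in code. The following is an assumed encoding, not the authors' implementation: formulas are nested tuples such as `('K', 'ep')` for \(K\,{ ep}\), `('@', 'i', phi)` for \(@_i \varphi \), `('L', S, phi)` for \([\mathcal {L}]\varphi \) with S a frozenset of \(\mathcal {L^-}\) formulas, and `('I', phi)` for \([\mathcal {I}]\varphi \). The helper `I_inv` stands in for the operator \(\mathcal {I}^{-1}\) of Definition 16 and is stubbed here.

```python
ATOMS = {'ep', 'ec', 'en', 'ip', 'ic', 'in'}

def I_inv(phi):
    # stub for I^{-1}; a real implementation substitutes the formulas
    # of Definition 16 at every atom occurrence
    return ('I_inv', phi)

def t(phi):
    """Translate an L_KIO formula into L_KL (left column of Def. 18)."""
    if isinstance(phi, str):                 # atoms and nominals
        return phi
    op = phi[0]
    if op == 'I':                            # eliminate [I] via t_I
        return t_I(phi[1])
    if op == 'L':                            # t([L]phi) = [L] t(phi)
        return ('L', phi[1], t(phi[2]))
    if op == '@':                            # t(@_i phi) = @_i t(phi)
        return ('@', phi[1], t(phi[2]))
    return (op,) + tuple(t(a) for a in phi[1:])   # not, and, F, K

def t_I(psi):
    """Compute t([I]psi) following the right column of Definition 18."""
    if isinstance(psi, str):
        return I_inv(psi) if psi in ATOMS else psi   # t([I]i) = i
    op = psi[0]
    if op == 'not':
        return ('not', t_I(psi[1]))
    if op == 'and':
        return ('and', t_I(psi[1]), t_I(psi[2]))
    if op in ('F', 'K'):                     # [I] commutes with F and K
        return (op, t_I(psi[1]))
    if op == '@':
        return ('@', psi[1], t_I(psi[2]))
    if op == 'L':                            # push I^{-1} into the set
        return ('L', frozenset(I_inv(f) for f in psi[1]), t_I(psi[2]))
    if op == 'I':                            # t([I][I]chi) = t([I] t([I]chi))
        return t(('I', t_I(psi[1])))
    raise ValueError(f'unknown operator {op!r}')
```

The double-modality clause `t(('I', t_I(psi[1])))` mirrors the last line of the table: the inner \([\mathcal {I}]\) is translated first, and the result is wrapped in \([\mathcal {I}]\) and translated again.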

Before we can prove the correctness of this translation, we need the following lemma:

Lemma 13

For all models \(\mathcal {M}\) and general learning modalities \(\mathcal {L}\):

$$\begin{aligned} (\mathcal {M}^{\mathcal {I}})^{\mathcal {L}} = (\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})})^{\mathcal {I}} . \end{aligned}$$

Proof

Let \(\mathcal {M} = (A, W, (\asymp _w)_{w \in W}, (\sim _a)_{a \in A}, g, V), (\mathcal {M}^{\mathcal {I}})^{\mathcal {L}} = (A_1, W_1, (\asymp _w^1)_{w \in W}, (\sim _a^1)_{a \in A}, g_1, V_1)\), and \((\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})})^{\mathcal {I}} = (A_2, W_2, (\asymp _w^2)_{w \in W}, (\sim _a^2)_{a \in A}, g_2, V_2)\). It is clear from the general definitions of \(\mathcal {M}^{\mathcal {I}}\) and \(\mathcal {M}^{\mathcal {L}}\), that \(W_1 = W_2 = W, A_1 = A_2 = A, g_1 = g_2 = g\), and \(\asymp _w^1=\asymp _w^2=\asymp _w\) (for all \(w \in W\)). Since a general learning update of a model \(\mathcal {M}\) does not change the valuation V, it is straightforward that \(V_1\) and \(V_2\) must be equal, as well. Thus, it only remains to show that \(\sim _a^1 = \sim _a^2\) (for all \(a \in A\)).

Now fix an \(a \in A\). Assume that \(w \sim _a^1 v\) (for some \(w, v \in W\)). By Definition 9 this is equivalent to:

$$\begin{aligned}&w \sim _a v \textit{ and, there is no agent } b \textit{ such that } a \asymp _w b \textit{ and;}\nonumber \\&\mathcal {M}^{\mathcal {I}}, w, b \models \varphi \textit{ and } \mathcal {M}^{\mathcal {I}}, v, b \models \lnot \varphi , \textit{ or } \nonumber \\&\mathcal {M}^{\mathcal {I}}, w, b \models \lnot \varphi \textit{ and } \mathcal {M}^{\mathcal {I}}, v, b \models \varphi , \textit{ for a } \varphi \in \mathcal {L}. \end{aligned}$$
(8)

(Note here “\(w \sim _a v\)” since the influence update \(\mathcal {I}\) does not change the \(\sim _a\) relations.) By (7) and Definition 17, (8) is further equivalent to:

$$\begin{aligned}&w \sim _a v \textit{ and, there is no agent } b \textit{ such that } a \asymp _w b \textit{ and;} \nonumber \\&\mathcal {M}, w, b \models \varphi \textit{ and } \mathcal {M}, v, b \models \lnot \varphi , \textit{ or } \nonumber \\&\mathcal {M}, w, b \models \lnot \varphi \textit{ and } \mathcal {M}, v, b \models \varphi , \textit{ for a } \varphi \in \mathcal {I}^{-1}(\mathcal {L}). \end{aligned}$$
(9)

This is exactly the condition for w and v being indistinguishable for agent a in the model \(\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})}\). And again, since the influence update \(\mathcal {I}\) does not change the \(\sim _a\) relations, (9) is equivalent to \(w \sim _a^2 v\). Thus \(\sim _a^1 = \sim _a^2\) must be the case and the proof is complete. \(\square \)

We can now finally prove:

Lemma 14

(Embedding \(\mathsf {KIO}\) into \(\mathsf {KL})\) For all formulas \(\varphi \) of \(\mathcal {L_{KIO}}\), all models \(\mathcal {M} = (A, W, (\asymp _w)_{w \in W}, (\sim _a)_{a \in A}, g, V)\), all agents \(a \in A\), and all worlds \(w \in W\):

$$\begin{aligned} \mathcal {M}, w, a \models \varphi&\textit{ iff }&\mathcal {M}, w, a \models t(\varphi ) . \end{aligned}$$
(10)

Proof

The proof is by induction on the number of \([\mathcal {I}]\) modalities occurring in the formula \(\varphi \) with sub-induction on formula complexity. If \(\varphi \) contains no \([\mathcal {I}]\) modalities, (10) follows easily from the clauses in the left column of Definition 18.

Induction step. Assume that (10) has been proven for all formulas with k occurrences of the \([\mathcal {I}]\) modality (for a fixed \(k \in \mathbb {N}_0\)). Let \(\varphi \) be a formula with \(k+1\) occurrences of the \([\mathcal {I}]\) modality. That (10) holds for \(\varphi \) is now proven by induction on the formula complexity of \(\varphi \). The only interesting cases are when \(\varphi \) is of the form \([\mathcal {I}]\psi \). If \(\psi \) is an atomic formula q, then (10) follows from Lemma 12. If \(\psi \) is a nominal, (10) is trivial.

Now if \(\psi \) is of the form \(\lnot \psi _0, \psi _0 \wedge \psi _1, F \psi _0, @_i \psi _0\), or \(K \psi _0\), (10) follows easily by noticing that the following are validities of \(\mathsf {KIO}\):

$$\begin{aligned} \begin{array}{l} \lnot [\mathcal {I}]\chi \leftrightarrow [\mathcal {I}]\lnot \chi \ ,\\ ([\mathcal {I}]\chi _0 \wedge [\mathcal {I}]\chi _1) \leftrightarrow [\mathcal {I}](\chi _0 \wedge \chi _1)\ ,\\ F [\mathcal {I}]\chi \leftrightarrow [\mathcal {I}]F \chi \ ,\\ @_i [\mathcal {I}]\chi \leftrightarrow [\mathcal {I}]@_i \chi \ ,\\ K [\mathcal {I}]\chi \leftrightarrow [\mathcal {I}]K \chi \ . \end{array} \end{aligned}$$

Now assume that \(\psi \) is of the form \([\mathcal {L}]\psi _0\). By Lemma 13 and induction, we get the following chain of equivalences:

$$\begin{aligned} \mathcal {M}, w, a \models [\mathcal {I}][\mathcal {L}]\psi _0&\textit{ iff }&(\mathcal {M}^{\mathcal {I}})^{\mathcal {L}}, w, a \models \psi _0 \\&\textit{ iff }&(\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})})^{\mathcal {I}}, w, a \models \psi _0 \qquad (\hbox {Lemma } 13)\\&\textit{ iff }&\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})}, w, a \models [\mathcal {I}]\psi _0 \\&\textit{ iff }&\mathcal {M}^{\mathcal {I}^{-1}(\mathcal {L})}, w, a \models t([\mathcal {I}]\psi _0) \qquad (\hbox {induction})\\&\textit{ iff }&\mathcal {M}, w, a \models [\mathcal {I}^{-1}(\mathcal {L})]t([\mathcal {I}]\psi _0) = t([\mathcal {I}][\mathcal {L}]\psi _0)\ . \end{aligned}$$
The final case, when \(\psi \) is of the form \([\mathcal {I}]\psi _0\), only applies if \(k \ge 1\), so assume this. Since \([\mathcal {I}]\psi _0\) only contains k occurrences of the \([\mathcal {I}]\) modality, it follows by induction that \([\mathcal {I}]\psi _0\) is semantically equivalent to \(t([\mathcal {I}]\psi _0)\). Thus \([\mathcal {I}][\mathcal {I}]\psi _0\) is semantically equivalent to \([\mathcal {I}]t([\mathcal {I}]\psi _0\)). This latter formula has only one occurrence of the \([\mathcal {I}]\) modality, which, again by induction, implies that it is equivalent to \(t([\mathcal {I}]t([\mathcal {I}]\psi _0))\). This completes the final case. \(\square \)

Corollary 1

(Validities of \(\mathsf {KIO}\) and \(\mathsf {KL})\) For all formulas \(\varphi \) of \(\mathcal {L_{KIO}}, \varphi \) is valid in \(\mathsf {KIO}\) if, and only if, \(t(\varphi )\) is valid in \(\mathsf {KL}\).

Hence, with this corollary, whenever one wants to prove that \(\varphi \) is a validity of \(\mathsf {KIO}\) one can simply use the tableau system of the previous section to prove that \(t(\varphi )\) is a validity of \(\mathsf {KL}\).
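As a small illustration (worked out here from Definitions 16 and 18), consider the \(\mathcal {L_{KIO}}\) formula \([\mathcal {I}]F\,{ ep}\). Its translation unfolds as

$$\begin{aligned} t([\mathcal {I}]F\,{ ep}) \ = \ F\, t([\mathcal {I}]{ ep}) \ = \ F\, \mathcal {I}^{-1}({ ep}) \ = \ F \Big ((\langle F \rangle { ep} \wedge F { ep}) \vee \big ({ ip} \wedge (\langle F\rangle { ep} \vee \lnot \langle F \rangle { ec})\big )\Big ) \ , \end{aligned}$$

and any \(\mathsf {KIO}\)-validity involving \([\mathcal {I}]F\,{ ep}\) can then be established by a tableau proof of its translation in \(\mathsf {KL}\).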

Cite this article

Christoff, Z., Hansen, J.U. & Proietti, C. Reflecting on Social Influence in Networks. J of Log Lang and Inf 25, 299–333 (2016). https://doi.org/10.1007/s10849-016-9242-y