Flow-based reputation with uncertainty: evidence-based subjective logic
Abstract
The concept of reputation is widely used as a measure of trustworthiness based on ratings from members in a community. The adoption of reputation systems, however, relies on their ability to capture the actual trustworthiness of a target. Several reputation models for aggregating trust information have been proposed in the literature. The choice of model has an impact on the reliability of the aggregated trust information as well as on the procedure used to compute reputations. Two prominent models are flow-based reputation (e.g., EigenTrust, PageRank) and subjective logic-based reputation. Flow-based models provide an automated method to aggregate trust information, but they are not able to express the level of uncertainty in the information. In contrast, subjective logic extends probabilistic models with an explicit notion of uncertainty, but the calculation of reputation depends on the structure of the trust network and often requires information to be discarded. These are severe drawbacks. In this work, we observe that the ‘opinion discounting’ operation in subjective logic has a number of basic problems. We resolve these problems by providing a new discounting operator that describes the flow of evidence from one party to another. The adoption of our discounting rule results in a consistent subjective logic algebra that is entirely based on the handling of evidence. We show that the new algebra enables the construction of an automated reputation assessment procedure for arbitrary trust networks, where the calculation no longer depends on the structure of the network, and does not need to throw away any information. Thus, we obtain the best of both worlds: flow-based reputation and consistent handling of uncertainties.
Keywords
Reputation systems · Evidence theory · Subjective logic · Flow-based reputation models
1 Introduction
Advances in ICT and the increasing use of the Internet have changed the way people carry out everyday activities and interact with each other. Much of what people do now happens online, significantly increasing the number of business transactions carried out daily over the Internet. Often, users have to decide whether to interact with services or users with whom they have never interacted before. Uncertainty about services’ and users’ behavior is often perceived as a risk [2] and, thus, it can restrain a user from engaging in a transaction with unknown parties. Therefore, to fully exploit the potential of online services, platforms and ultimately online communities, it is necessary to establish and manage trust among the parties involved in a transaction [11, 40, 43].
Reputation is widely adopted to build trust among users in online communities where users do not know each other beforehand. The basic idea underlying reputation is that a user’s past experience as well as the experience of other users influences his decision whether to repeat an interaction in the future. Thus, reputation provides an indication of services’ and users’ trustworthiness based on their past behavior [36]. When a user has to decide whether to interact with another party, he can consider its reputation and start the transaction only if that party appears trustworthy. Therefore, a reputation system, which helps manage reputations (e.g., by collecting, distributing and aggregating feedback about services’ and users’ behavior), becomes a fundamental component of the trust and security architecture of any online service or platform [42].
The application and adoption of reputation systems, however, rely on their ability to capture the actual trustworthiness of the parties involved in a transaction [41]. The quality of a reputation value depends on the amount of information used for its computation [10, 17, 27]. A reputation system should use “sufficient” information. However, it is difficult to establish the minimum amount of information required to compute reputation; moreover, different users may have different perceptions based on their risk attitude [2]. For instance, some users may be willing to interact with a party that has a high reputation based on very few past transactions, while other users might require more evidence of good behavior. Therefore, a reputation system should provide a level of confidence in the computed reputation, for instance based on the amount of information used in the computation [15, 32]. This additional information will provide deeper insights to users, helping them decide whether to engage in a transaction or not. In addition, reputation systems should provide an effective and preferably automated method to aggregate the available trust information and compute reputations from it.
Reputation systems usually rely on a mathematical model to aggregate trust information and compute reputation [13]. Several mathematical models for reputation have been proposed in the literature. These models can be classified with respect to their mathematical foundations, e.g., summation and averaging [14], probabilistic models [15, 21, 28], flow-based models [6, 23, 25, 35], fuzzy metrics [3, 38]. As pointed out in [42], the choice of the type of model has an impact on the type and amount of trust information as well as on the procedure used to compute reputation.
Among these, two prominent reputation models are the flow-based model and subjective logic (SL) [15]. Flow-based reputation models use Markov chains as the mathematical foundation. Flow-based models provide an automated method to aggregate all available trust information. However, they are not able to express the level of confidence in the obtained reputation values. On the other hand, SL is rooted in the well-known Dempster–Shafer theory [34]. SL provides a mathematical foundation to deal with opinions and has the natural ability to express uncertainty explicitly. Intuitively, uncertainty incorporates a margin of error into the reputation calculation due to the (limited) amount of available trust information. SL uses a consensus operator ‘\(\oplus \)’ to fuse independent opinions and a discounting operator ‘\(\otimes \)’ to compute trust transitivity. This makes SL a suitable mathematical framework for handling trust relations and reputation, especially when limited evidence is available. However, the consensus operator is rooted in the theory of evidence, while the discounting operator is based on a probabilistic interpretation of opinions. The different nature of these operators leads to a lack of “cooperation” between them. As a consequence, the calculation of reputation depends on the shape of the trust network, the graph of interactions, in which nodes represent the entities in the system and edges are labeled with opinions. Depending on the structure of the trust network, some trust information may have to be discarded to enable SL-based computations.
In this paper, we address these issues. Our main contributions are the following:

We observe that the discounting rule ‘\(\otimes \)’ in SL does not have a natural interpretation in terms of evidence handling. We give examples of counterintuitive behavior of the \(\otimes \) operation.

We present a brief inventory of the problems that occur when one tries to combine SL with flow-based reputation metrics.

We present a simplified justification of the mapping between evidence and opinions in SL.

We introduce a new scalar multiplication operation in SL which corresponds to a multiplication of evidence. Our scalar multiplication is consistent with the consensus operation (which amounts to addition of evidence) and hence satisfies a distribution law, namely \(\alpha \cdot (x\oplus y)=(\alpha \cdot x)\oplus (\alpha \cdot y)\).

We introduce a new discounting rule \(\boxtimes \). It represents the flow of evidence from one party to another. During this flow, lack of trust in the party from whom evidence is received is translated into a reduction in the amount of evidence; this reduction is implemented using our new scalar multiplication rule. Our new discounting rule satisfies \(x\boxtimes (y\oplus z)=(x\boxtimes y)\oplus (x\boxtimes z)\). This right-distributivity property resolves one of the problems of SL.

We show that replacing SL’s discounting operation \(\otimes \) by the new \(\boxtimes \) solves all the problems that usually occur when one tries to combine flow-based reputation with SL, in particular the problem of evidence double counting and the ensuing necessity to make computations graph-dependent and to discard information.

Using EBSL, we construct a simple iterative algorithm that computes reputation for arbitrary trust networks without discarding any information. Thus, we achieve our desideratum of automated computation of flow-based reputation with uncertainty.
The remainder of the paper is organized as follows. The next section presents an overview of flow-based reputation models and SL. Section 3 discusses the limitations of SL, and Sect. 4 illustrates these limitations when combining flow-based reputation and SL. Section 5 revisits SL and introduces the new EBSL scalar multiplication and discounting operators. Section 6 presents our flow-based reputation model with uncertainty along with an iterative algorithm that computes reputation for arbitrary trust networks. Section 7 presents an evaluation of our approach using both synthetic and real data. Finally, Sect. 8 discusses related work, and Sect. 9 concludes the paper, providing directions for future work.
2 Preliminaries
In this section, we present an overview of flow-based reputation models (based on [35]) and of subjective logic (based on [15]). We also introduce the notation used in the remainder of the paper.
2.1 Flow-based reputation
Flow-based reputation systems [6, 23, 25, 35] are based on the notion of transitive trust. Intuitively, if an entity i trusts an entity j, it would also have some trust in the entities trusted by j.
Typically, each time user i has a transaction with another user j, she may rate the transaction as positive, neutral or negative. In a flow-based reputation model, these ratings are aggregated in order to obtain a Markov chain. The reputation vector (i.e., the vector containing all reputation values) is computed as the steady-state vector of the Markov chain; one starts with a vector of initial reputation values and then repeatedly applies the Markov step until a stable state has been reached. This corresponds to taking more and more indirect evidence into account.
Below, we present the metric proposed in [35] (with slightly modified notation) as an example of a flow-based reputation system.
Example 1
Equation (1) can be read as follows. To determine the reputation of user x we first take into account the direct information about x. From this, we can compute \(s_x\), the reputation initially assigned to x if no further information is available. However, additional information may be available, namely the aggregated ratings in A. The weight of direct versus indirect information is accounted for by the parameter \(\alpha \). If no direct information about x is available, the reputation of x can be computed as \(r_x = \sum _y(r_y / \ell )A_{yx}\), i.e., a weighted average of the ratings \(A_{yx}\) with weights equal to the normalized reputations. Adding the two contributions, with weights \(\alpha \) and \(1-\alpha \), yields (1): a weighted average over all available information.
Equation (1) is implicit: the unknown \(\mathbf{r}\) appears on both sides. A solution is obtained by repeatedly substituting (1) into itself until a stable state has been reached. This is the steady state of the Markov chain. It was shown in [35] that Eq. (1) always has a solution and that the solution is unique.
Intuitively, A can be seen as an adjacency matrix of a trust network where nodes represent entities and edges represent the direct trust that entities have in other entities based on direct experience. Based on the results presented in [35], Eq. (1) can be applied to assess reputation for arbitrary trust networks.
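The fixed-point iteration described above can be sketched in a few lines of code. The sketch below is a hypothetical instance of a flow-based update of the form \(\mathbf{r} \leftarrow \alpha \mathbf{s} + (1-\alpha )f(\mathbf{r})\), where \(f\) averages the ratings with weights proportional to the current normalized reputations; the matrix A, the vector s and the weight alpha are made-up inputs, and the update rule is a generic stand-in rather than the exact metric of [35].

```python
def flow_reputation(A, s, alpha=0.5, tol=1e-12, max_iter=10000):
    """Iterate a generic flow-based update r <- alpha*s + (1-alpha)*f(r),
    where f(r) is a weighted average of the ratings A[y][x] with weights
    proportional to the current reputations, until a fixed point
    (the steady state of the Markov chain) is reached."""
    n = len(s)
    r = list(s)
    for _ in range(max_iter):
        total = sum(r)
        w = [ri / total for ri in r]          # normalized reputations
        r_new = [alpha * s[x]
                 + (1 - alpha) * sum(w[y] * A[y][x] for y in range(n))
                 for x in range(n)]
        if max(abs(a - b) for a, b in zip(r_new, r)) < tol:
            return r_new
        r = r_new
    return r

# hypothetical 3-user network: A[y][x] is y's aggregated rating of x
A = [[0.0, 0.9, 0.2],
     [0.8, 0.0, 0.4],
     [0.3, 0.7, 0.0]]
s = [0.6, 0.5, 0.3]   # reputation from direct information only
r = flow_reputation(A, s)
```

Repeated substitution of the right-hand side into itself corresponds to taking more and more indirect evidence into account; the loop stops once a stable state is reached.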
2.2 Subjective logic
Subjective logic (SL) is a trust algebra based on Bayesian theory and Boolean logic that explicitly models uncertainty and belief ownership. In the remainder of this section, we provide an overview of SL based on [15].
The central concept in SL is the three-component opinion.
Definition 1
(Opinion and opinion space) [15] An opinion x about some proposition P is a tuple \(x=(x_\mathrm{b},x_\mathrm{d},x_\mathrm{u})\), where \(x_\mathrm{b}\) represents the belief that P is provable (belief), \(x_\mathrm{d}\) the belief that P is disprovable (disbelief), and \(x_\mathrm{u}\) the belief that P is neither provable nor disprovable (uncertainty). The components of x satisfy \(x_\mathrm{b}+x_\mathrm{d}+x_\mathrm{u}=1\). The space of opinions is denoted as \(\varOmega \) and is defined as \(\varOmega =\{(b,d,u)\in [0,1]^3 \;:\; b+d+u=1\}\).
An opinion x with \(x_\mathrm{b}+x_\mathrm{d}<1\) can be seen as an incomplete probability distribution. In order to enable the computation of expectation values, SL extends the three-component opinion with a fourth parameter ‘a’ called relative atomicity, with \(a\in [0,1]\). The probability expectation is \(E(x)=x_\mathrm{b}+x_\mathrm{u}a\). In this paper, we will omit the relative atomicity from our notation, because in our context (trust networks) it is not modified by any of the computations on opinions. In more complicated situations, however, the relative atomicity is modified in nontrivial ways.
Intuitively, the amount of positive and negative evidence about a proposition determines the belief and the disbelief in the proposition, respectively. Increasing the total amount of evidence (e) reduces the uncertainty. Note that there is a fundamental difference between an opinion where a proposition is equally provable and disprovable and one where we have complete uncertainty about the proposition. For instance, opinion (0, 0, 1) indicates that there is no evidence either supporting or contradicting the proposition, i.e., \(n=p=0\), whereas opinion (0.5, 0.5, 0) indicates that \(n=p=\infty \).
We use the notation \(p(x)\mathop {=}\limits ^\mathrm{def}2\frac{x_\mathrm{b}}{x_\mathrm{u}}\) to denote the amount of supporting evidence underlying opinion x, and likewise \(n(x)\mathop {=}\limits ^\mathrm{def}2\frac{x_\mathrm{d}}{x_\mathrm{u}}\) for the amount of ‘negative’ evidence. Moreover, we use the notation \(e(x)=p(x)+n(x)\) to represent the total amount of evidence underlying opinion x.
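In code, this evidence-to-opinion mapping is a one-liner in each direction. The sketch below (the function names are ours, not from the paper) uses the constant 2 appearing in the definitions above.

```python
def opinion_from_evidence(p, n, c=2.0):
    """Map evidence counts (p positive, n negative) to an opinion
    (b, d, u) with b + d + u = 1; c = 2 is the standard SL constant."""
    total = p + n + c
    return (p / total, n / total, c / total)

def evidence_from_opinion(x, c=2.0):
    """Inverse mapping: p(x) = c*b/u, n(x) = c*d/u (requires u > 0)."""
    b, d, u = x
    return (c * b / u, c * d / u)

# example with made-up evidence counts: 4 positive, 2 negative experiences
x = opinion_from_evidence(4.0, 2.0)
```

Note that zero evidence maps to the full-uncertainty opinion (0, 0, 1), while growing evidence drives the uncertainty component toward 0.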
Definition 2
Equation (5) precisely corresponds to (4). Note that \(x\oplus y=y\oplus x\). Furthermore, the consensus operation is associative, i.e., \(x\oplus (y\oplus z)=(x\oplus y)\oplus z\). These properties are exactly what one intuitively expects from an operation that combines evidence. It is worth noting that the evidence has to be independent for the \(\oplus \) rule to apply. Combining dependent evidence would lead to the problem of double-counting evidence. We formalize and discuss this problem in Sect. 3.3.
The second important operation in SL is the transfer of opinions from one party to another. Consider the following scenario. Alice has opinion x about Bob’s trustworthiness. Bob has opinion y about some proposition P. He informs Alice of his opinion. Alice now has to form an opinion about P. The standard solution to this problem is that Alice applies an x-dependent weight to Bob’s opinion y [4, 23, 26, 31, 35]. This is the so-called discounting. The following formula is usually applied.
Definition 3
It holds that \(x\otimes y\ne y\otimes x\) and that \(x\otimes (y\otimes z)=(x\otimes y)\otimes z\). The discounting rule (6) is not distributive w.r.t. consensus, i.e., \((x\oplus y)\otimes z\ne (x\otimes z)\oplus (y\otimes z)\) and \(x\otimes (y\oplus z)\ne (x\otimes y)\oplus (x\otimes z)\).
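The lack of distributivity is easy to check numerically. The sketch below uses the standard SL formulas for consensus and discounting (consistent with the special-point behavior listed in Sect. 3.1) on made-up opinions; the two ways of combining them give different results, and the distributed form ends up with less uncertainty.

```python
def consensus(x, y):
    """Standard SL consensus (fusion) of two opinions."""
    (b1, d1, u1), (b2, d2, u2) = x, y
    k = u1 + u2 - u1 * u2
    return ((u1 * b2 + u2 * b1) / k, (u1 * d2 + u2 * d1) / k, u1 * u2 / k)

def discount(x, y):
    """Standard SL discounting: weight y's belief and disbelief by x_b."""
    (b1, d1, u1), (b2, d2, u2) = x, y
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

# made-up opinions, purely for illustration
x = (0.8, 0.1, 0.1)
y = (0.6, 0.2, 0.2)
z = (0.1, 0.6, 0.3)

lhs = consensus(discount(x, y), discount(x, z))   # (x⊗y) ⊕ (x⊗z)
rhs = discount(x, consensus(y, z))                # x ⊗ (y⊕z)
```

Both results are valid opinions (components sum to 1), yet they differ; in particular, the uncertainty component of `lhs` is smaller than that of `rhs`, a symptom of the double counting discussed in Sect. 3.3.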
In SL, a trust network can be modeled with a combination of consensus and discounting operators. The consensus operator is used to aggregate trust information from different sources, while the discounting operator is used to implement trust transitivity. Note that, in a trust network, SL distinguishes two types of trust relationship: functional trust, which represents the opinion about an entity’s ability to provide a specific function, and referral trust, which represents the opinion about an entity’s ability to provide recommendations about other entities. Referral trust is assumed to be transitive, and a trust chain is said to be valid if the last edge of the chain represents functional trust and all previous edges represent referral trust.
3 Limitations of subjective logic
Our desideratum is a novel reputation metric that has the advantages of both SL and flow-based models. On the one hand, we aim at an automated procedure for computing reputation as in flow-based approaches. On the other hand, we aim to determine the confidence in reputation values by making uncertainty explicit as in SL. In this section, we discuss the limitations of SL. Then, in Sect. 4, we show how these limitations affect a naïve approach that combines flow-based reputation and SL.
3.1 Dogmatic opinions
Definition 4
The special points B, D, U behave as follows regarding the consensus operation: \(B\oplus x=B\); \(D\oplus x=D\); \(U\oplus x=x\). The full uncertainty U behaves like an additive zero.
With respect to the discounting rule, the special points B, D, U behave as \(B\otimes x=x\), \(D\otimes x=U\), \(U\otimes x=U\), \(x\otimes U=U\), \(x\otimes B=(x_\mathrm{b},0,1-x_\mathrm{b})\), \(x\otimes D=(0,x_\mathrm{b},1-x_\mathrm{b})\).
Opinions that have \(u=0\) (i.e., lying on the line between B and D) are called ‘dogmatic’ opinions. They have to be treated with caution, since they have \(e=\infty \) and therefore overwhelm other opinions when the consensus \(\oplus \) is applied. We will come back to this issue in Sect. 5.1.
3.2 Counterintuitive behavior of the \(\otimes \) operation
Example 2
Let \(x,y\in \varOmega \), with \(n(x)=0\), \(n(y)=0\) and \(p(y)\gg p(x)\). Then, \(p(x\otimes y)\approx p(x)/4\).
Example 3
Let \(x,y\in \varOmega \), with \(n(x)=0\), \(p(y)=0\) and \(n(y)\gg p(x)\). Then, \(n(x\otimes y)\approx p(x)/4\).
In both these examples, y is based on a lot of evidence; but even if x contains a lot of belief, none of y’s evidence survives in \(x\otimes y\). We conclude that the discounting operation \(\otimes \) gives counterintuitive results.
The \(\otimes \) rule is inspired by a probabilistic interpretation of opinions. The probabilistic interpretation might suggest that it is natural to multiply probabilities, i.e., that the expressions \((x\otimes y)_\mathrm{b}=x_\mathrm{b}y_\mathrm{b}\) and \((x\otimes y)_\mathrm{d}=x_\mathrm{b}y_\mathrm{d}\) are intuitively correct. However, we argue that this is not at all self-evident. When discounting y through x, the uncertainties in x induce an x-dependent probability distribution on y. This can be thought of as an additional layer of uncertainty about beta distributions. Let (3) describe opinion y; then, the discounting through x introduces uncertainty about the parameters p and n in the equation (a probability distribution on p and n). It is not at all self-evident that the resulting opinion is \(x\otimes y\) as prescribed by Definition 3. It would make equal sense to replace the discounting factor \(x_\mathrm{b}\) by, e.g., the expectation \(x_\mathrm{b}+ax_\mathrm{u}\). In this paper, we do not pursue such an approach based on distributions, but we mention it in order to point out that the \(\otimes \) rule is not necessarily well-founded.
Example 4
Alice has opinion x about Bob’s trustworthiness in providing recommendations. Bob experiments with chocolate on Monday and forms an opinion y about its medicinal qualities. On Tuesday, he does some more of the same kind of experiments and forms an independent opinion z. He informs Alice of y, z and his final opinion \(y\oplus z\). What should Alice think about the medicinal qualities of chocolate? One approach is to say that Alice should appraise opinions y and z separately, yielding \((x\otimes y)\oplus (x\otimes z)\) (Note that the two occurrences of x represent the very same opinion, i.e., the evidence underlying the two occurrences is the same). Another approach is to weight Bob’s combined opinion, yielding \(x\otimes (y\oplus z)\). Intuitively, the two approaches should yield exactly the same opinion, yet the SL rules give (10).
We now present a numerical example that illustrates the issue discussed above.
Example 5
Figure 1 shows the trust network representing the scenario in Example 4. To highlight that opinions y and z are independent, in the figure we abuse the network notation and duplicate the node representing Bob: \(B_1\) represents Bob on Monday, and \(B_2\) represents Bob on Tuesday. The edge between Alice (A) and Bob (dashed rectangle) represents Alice’s opinion x about Bob’s recommendations. This opinion concerns Bob’s recommendations regardless of when they are formed (e.g., on Monday or on Tuesday).
 1.
\(w=(x\otimes y)\oplus (x\otimes z)=(0.314,0.341,0.345)\)
 2.
\(w=x\otimes (y\oplus z)=(0.227,0.324,0.449)\)
3.3 Double counting of evidence in trust networks
The \(\oplus \) rule imposes constraints on the evidence that can be aggregated: It requires evidence to be independent [15]. In the literature, however, there is no well-defined notion of evidence independence. Some researchers [33, 45] assume that pieces of evidence are independent if they are obtained from independent sources, where two sources are said to be independent if they measure completely unrelated features. This definition, however, is too restrictive. For instance, the evidence collected by a sensor at different points in time can also be independent.
In this work, we adopt a notion of opinion independence based on the independence of evidence, which we define in the same way as independence of random variables. We adopt the usual notation: Names of random variables are written in capitals, and numerical realizations of random variables are denoted with lowercase letters.
Definition 5
(Independent evidence) Let \(E_i\) and \(E_j\) be evidence-valued random variables. We say that \(E_i\) and \(E_j\) are independent if and only if \(\mathrm{Prob}(E_i=e_i\mid E_j=e_j) = \mathrm{Prob}(E_i=e_i)\).
The definition above can be extended to opinions.
Definition 6
(Independent opinions) Let X and Y be \(\varOmega \)-valued random variables. We say that X and Y are independent if and only if the evidence underlying X and Y is independent.
Combining dependent evidence leads to the problem of double-counting evidence.
Definition 7
(Double counting) Let X and Y be \(\varOmega \)-valued random variables. In an expression of the form \(X \oplus Y\), we say that there is double counting if there is dependence between the evidence underlying X and the evidence underlying Y.
Intuitively, dependent evidence shares “part” of the evidence. Therefore, aggregating dependent evidence leads to counting some part of the evidence more than once.^{1}
Example 6
Consider the expression \((Y\otimes X)\oplus (Z\otimes X)\), where both occurrences of X are obtained from the same observation. The evidence underlying X is contained on the left side as well as the right side of the ‘\(\oplus \).’ This is a clear case of double counting.
Example 7
Consider the expression \((X\otimes Y)\oplus (X\otimes Z)\), again with both instances ‘X’ coming from the same observation. The evidence underlying X is contained on the left side as well as the right side of the ‘\(\oplus \),’ but less evidently than in Example 6, because now X is used for discounting. In Sect. 3.2, we showed that the \(\otimes \) rule causes evidence from X to end up in \(X\otimes Y\) in a complicated way. Hence, the opinions \(X\otimes Y\) and \(X\otimes Z\) are not independent, even if Y and Z are independent. This causes double counting of X in the expression \((X\otimes Y)\oplus (X\otimes Z)\).
It is worth noting that double counting of X in the expression \((X\otimes Y)\oplus (X\otimes Z)\) can also be observed in Example 5. Indeed, the uncertainty in \((X\otimes Y)\oplus (X\otimes Z)\) is lower than the uncertainty in \(X \otimes (Y\oplus Z)\), indicating that the result contains more evidence when the trust network is represented using the first expression.
To avoid the problem of double counting, SL requires that the trust network is expressed in a canonical form [20, 22], where all trust paths are independent. Intuitively, a trust network expression is in canonical form if every edge appears only once in the expression.
Example 8
Consider the two trust network expressions representing the trust network in Fig. 1 (Example 4): \((x\otimes y)\oplus (x\otimes z)\) and \(x\otimes (y\oplus z)\). The first expression is not in canonical form as opinion x occurs twice in the expression; the second expression is in canonical form as every edge appears only once in the expression. Thus, \(x\otimes (y\oplus z)\) is the proper representation of the trust network in Fig. 1.
In the next section, we show that it is not always possible to express a trust network in canonical form. As suggested in [20, 22], this issue can be addressed by removing some edges from the network. This, however, means discarding part of the trust information, thus reducing the quality of reputation values.
4 Combining flow-based reputation and subjective logic
This section presents a naïve approach that combines flow-based reputation and SL, and illustrates the limitations of such an approach.
We first introduce some notation and definitions. Flow-based reputation models usually assume that users who are honest during transactions are also honest in reporting their ratings [23]. This assumption, however, does not hold in many real-life situations [1]. Thus, as is done in SL, we distinguish between referral trust and functional trust (see Sect. 2). We use A to represent direct referral trust and T to represent direct functional trust. R denotes the final referral trust and F the final functional trust.
Definition 8
For n users, the direct referral trust matrix A is an \(n\times n\) matrix, where \(A_{xy} \in \varOmega \) (with \(x\ne y\)) is the direct referral trust that user x has in user y, and \(A_{xx} = (0,0,1)\) for all x.
Note that we impose the condition \(A_{xx} = (0,0,1)\) in order to prevent artifacts caused by selfrating [35].
In conclusion, (i) generic trust networks with several connections \(A_{ij}\ne U\) cannot be handled with SL because there is no canonical form for them that avoids double counting; (ii) even when there is a canonical result, this result cannot be reproduced by a straightforward recursive approach.
5 Subjective logic revisited
This section presents a new, fully evidence-based approach to SL. We refer to the resulting opinion algebra as Evidence-Based Subjective Logic or EBSL.
5.1 Excluding dogmatic opinions
As mentioned in Sect. 3.1, dogmatic opinions are problematic when the \(\oplus \) operation is applied to them. Furthermore, a dogmatic opinion corresponds to an infinite amount of evidence, which in our context is not realistic. In the remainder of this paper, we will exclude dogmatic opinions. We will work with a reduced opinion space defined as follows.
Definition 9
The opinion space excluding dogmatic opinions is denoted as \(\varOmega '\) and is defined as \(\varOmega '\mathop {=}\limits ^\mathrm{def}\{(b,d,u)\in [0,1)\times [0,1)\times (0,1] \;:\; b+d+u=1\}\).
We are by no means the first to do this; in fact, the exclusion of dogmatic opinions was proposed as an option in the very early literature on SL [22].
5.2 The relation between evidence and opinions: a simplified justification
We make a short observation about the mapping between evidence and opinions. As was mentioned in Sect. 2.2, there is a one-to-one mapping (2) based on the analysis of probability distributions (Beta distributions). Here, we show that there is a shortcut: The same mapping can also be obtained in a much simpler way, based on constraints.
Theorem 1
 1.
\(b/d=p/n\)
 2.
\(b+d+u=1\)
 3.
\(p+n=0\Rightarrow u=1\)
 4.
\(p+n\rightarrow \infty \Rightarrow u\rightarrow 0\)
Proof
Property 1 gives \((b,d)=(p,n)/f(p,n)\) where f is some function. Combined with property 2, we then have \(\frac{p+n}{f(p,n)}=1u\). Property 3 then gives \(\frac{0}{f(0,0)}=0\), while property 4 gives \(\lim _{p+n\rightarrow \infty } \frac{p+n}{f(p,n)}=1\). The latter yields \(f(p,n)=p+n+c\) where c is some constant. Allowing \(c<0\) would open the possibility of components of x being negative. Thus, we must have \(c\ge 0\). The requirement \(\frac{0}{f(0,0)}=0\) eliminates the possibility of having \(c=0\), since \(c=0\) would yield division by zero. Finally, setting \(u=c/(p+n+c)\) is necessary to satisfy property 2. \(\square \)
Theorem 1 shows that we can derive a formula similar to (2), based on minimal requirements which make intuitive sense. Only the constant c is not fixed by the imposed constraints; it has to be determined from the context. One can interpret c as a kind of soft threshold on the amount of evidence: Beyond this threshold one starts gaining enough confidence from the evidence to form an opinion.
We observe that (17) with its generic constant c is already sufficient to derive the consensus rule \(\oplus \), i.e., the consensus rule does not require \(c=2\).
Lemma 1
The mapping (17) with arbitrary c implies the consensus rule \(\oplus \) as specified in Definition 2.
Proof
Consider \(x=(b_1,d_1,u_1)=(p_1,n_1,c)/(p_1+n_1+c)\) and \(y=(b_2,d_2,u_2)=(p_2,n_2,c)/(p_2+n_2+c)\). An opinion formed from the combined evidence \((p_1+p_2,n_1+n_2)\) according to (17) is given by \((b,d,u)=\frac{(p_1+p_2,n_1+n_2,c)}{p_1+n_1+p_2+n_2+c}\). Substituting \(p_i=c \frac{b_i}{u_i}\) and \(n_i=c \frac{d_i}{u_i}\) yields, after some simplification, \((b,d,u)=\frac{(u_1 b_2+u_2 b_1,u_1 d_2+u_2 d_1,u_1 u_2)}{u_1+u_2-u_1 u_2}\). \(\square \)
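Lemma 1 can also be checked numerically: mapping the summed evidence counts to an opinion gives the same result as fusing the two individually mapped opinions. The constant c and the evidence counts below are arbitrary choices, made up for illustration.

```python
def opinion_from_evidence(p, n, c):
    """Mapping (17): (b, d, u) = (p, n, c) / (p + n + c)."""
    total = p + n + c
    return (p / total, n / total, c / total)

def consensus(x, y):
    """Standard SL consensus formula (Definition 2)."""
    (b1, d1, u1), (b2, d2, u2) = x, y
    k = u1 + u2 - u1 * u2
    return ((u1 * b2 + u2 * b1) / k, (u1 * d2 + u2 * d1) / k, u1 * u2 / k)

c = 3.0                                 # arbitrary positive constant, not just c = 2
x = opinion_from_evidence(4.0, 2.0, c)  # made-up evidence counts
y = opinion_from_evidence(6.0, 0.0, c)

combined = opinion_from_evidence(4.0 + 6.0, 2.0 + 0.0, c)  # add evidence first
fused = consensus(x, y)                                    # fuse opinions instead
```

The two results coincide, illustrating that the consensus rule amounts to addition of evidence for any \(c>0\).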
Theorem 1 and Lemma 1 improve our understanding of evidencebased opinion forming and of the consensus rule.
Furthermore, in (2) and (3), we can replace ‘2’ by c and the expectation of t, obtained by integrating the Beta distribution times t, is still \(x_\mathrm{b}+ax_\mathrm{u}\)! Therefore, in the remainder of the paper, we will work with a redefined version of the p(x) and n(x) functions (Sect. 2.2). The new version has a general value \(c>0\) instead of \(c=2\).
Definition 10
5.3 Scalar multiplication
Our next contribution has more impact. We define an operation on opinions that is equivalent to a scalar multiplication on the total amount of evidence.
Definition 11
Lemma 2
 1.
\(\alpha \cdot x\in \varOmega '\).
 2.
\(0\cdot x=U\).
 3.
\(1\cdot x=x\).
 4.
For \(n\in \mathbb N\), \(n\ge 2\), it holds that \(n\cdot x=\underbrace{x\oplus x\oplus \cdots \oplus x}_{n}\).
 5.
The evidence underlying \(\alpha \cdot x\) is \(\alpha \) times the evidence underlying x, i.e., \(p(\alpha \cdot x)=\alpha p(x)\) and \(n(\alpha \cdot x)=\alpha n(x)\).
 6.
If \(\alpha \ne 0\) then \((\alpha \cdot x)_\mathrm{b}/(\alpha \cdot x)_\mathrm{d}=x_\mathrm{b}/x_\mathrm{d}\).
Proof
Property 1: It is readily verified that the components add to 1, using \(x_\mathrm{b}+x_\mathrm{d}=1-x_\mathrm{u}\). Since \(\alpha \ge 0\), all three components of \(\alpha \cdot x\) are nonnegative, since \(x\in \varOmega '\). Properties 2 and 3: Found directly by substituting \(\alpha =0\) resp. \(\alpha =1\) in (19). Property 4: Consider \(n=2\). Setting \(\alpha =2\) in (19) yields \(2\cdot x=\frac{(2x_\mathrm{b},2x_\mathrm{d},x_\mathrm{u})}{2-x_\mathrm{u}}=\frac{(2x_\mathrm{u}x_\mathrm{b},2x_\mathrm{u}x_\mathrm{d},x_\mathrm{u}^2)}{2x_\mathrm{u}-x_\mathrm{u}^2}\) \(=x\oplus x\). The rest follows by induction. Property 5: We use (17) to map between opinions and evidence. The positive evidence of x is \(p(x)=cx_\mathrm{b}/x_\mathrm{u}\). The positive evidence of \(\alpha \cdot x\) is \(p(\alpha \cdot x)\) \(=c\frac{\alpha x_\mathrm{b}}{\alpha (1-x_\mathrm{u})+x_\mathrm{u}}/\frac{x_\mathrm{u}}{\alpha (1-x_\mathrm{u})+x_\mathrm{u}}\) \(=\alpha cx_\mathrm{b}/x_\mathrm{u}=\alpha \, p(x)\). The proof for \(n(\alpha \cdot x)\) is analogous. Property 6: Follows directly by dividing the first and second component of \(\alpha \cdot x\). \(\square \)
Lemma 3
Proof
We make extensive use of \(\alpha \cdot x\propto (\alpha x_\mathrm{b},\alpha x_\mathrm{d},x_\mathrm{u})\). First part: On the one hand, \(\alpha \cdot (x\oplus y)\propto (\alpha [x_\mathrm{u}y_\mathrm{b}+y_\mathrm{u}x_\mathrm{b}],\alpha [x_\mathrm{u}y_\mathrm{d}+y_\mathrm{u}x_\mathrm{d}],x_\mathrm{u}y_\mathrm{u})\). We have not written the normalization factor; it is not necessary since we already know that the result is normalized. On the other hand, \((\alpha \cdot x)\oplus (\alpha \cdot y)\propto \) \((x_\mathrm{u}[\alpha y_\mathrm{b}]+y_\mathrm{u}[\alpha x_\mathrm{b}],x_\mathrm{u}[\alpha y_\mathrm{d}]+y_\mathrm{u}[\alpha x_\mathrm{d}],x_\mathrm{u}y_\mathrm{u})\).
Second part: On the one hand, \((\alpha +\beta )\cdot x\propto ([\alpha +\beta ]x_\mathrm{b},[\alpha +\beta ]x_\mathrm{d},x_\mathrm{u})\). On the other hand, \((\alpha \cdot x)\oplus (\beta \cdot x)\) \(\propto ([\beta \cdot x]_\mathrm{u}[\alpha \cdot x]_\mathrm{b}+[\alpha \cdot x]_\mathrm{u}[\beta \cdot x]_\mathrm{b}, [\beta \cdot x]_\mathrm{u}[\alpha \cdot x]_\mathrm{d}+[\alpha \cdot x]_\mathrm{u}[\beta \cdot x]_\mathrm{d}, [\alpha \cdot x]_\mathrm{u}[\beta \cdot x]_\mathrm{u})\) \(\propto (x_\mathrm{u}[\alpha x_\mathrm{b}]+x_\mathrm{u}[\beta x_\mathrm{b}], x_\mathrm{u}[\alpha x_\mathrm{d}]+x_\mathrm{u}[\beta x_\mathrm{d}], x_\mathrm{u}^2)\) \(\propto (\alpha x_\mathrm{b}+\beta x_\mathrm{b},\alpha x_\mathrm{d}+\beta x_\mathrm{d},x_\mathrm{u})\). \(\square \)
Lemma 4
Let \(x\in \varOmega '\) and \(\alpha ,\beta \ge 0\). Then, \(\alpha \cdot (\beta \cdot x)=(\alpha \beta )\cdot x\).
Proof
\(\alpha \cdot (\beta \cdot x)\propto \alpha \cdot (\beta x_\mathrm{b},\beta x_\mathrm{d},x_\mathrm{u})\) \(\propto (\alpha \beta x_\mathrm{b},\alpha \beta x_\mathrm{d},x_\mathrm{u})\).
\(\square \)
5.4 New discounting rule
We propose a new approach to discounting: Instead of multiplying (part of) the opinions, we multiply the evidence. The multiplication is done using our scalar multiplication rule (Definition 11). We return to the example where Alice has an opinion \(x\in \varOmega '\) about the trustworthiness of Bob, and Bob has an opinion \(y\in \varOmega '\) about some proposition P. We propose a discounting of the form \(g(x)\cdot y\), where \(g(x)\ge 0\) is a scalar that indicates which fraction of Bob’s evidence is accepted by Alice. One can visualize the discounting as a physical transfer of evidence from Bob to Alice, during which only a fraction g(x) survives, due to Alice’s mistrust and/or uncertainty. It is desirable to set g(x) in the range [0, 1]: allowing \(g(x)<0\) would lead to negative amounts of evidence (not to be confused with the term ‘negative evidence’ which is used for evidence that contradicts the proposition P); allowing \(g(x)>1\) would “amplify” evidence, i.e., create new evidence out of nothing, which is clearly unrealistic.
It makes intuitive sense to set \(\lim _{x\rightarrow B}g(x)=1\), \(\lim _{x\rightarrow D}g(x)=0\) and \(g(U)=0\), or even to set \(g(x)=\tilde{g}(x_\mathrm{b})\), i.e., a function of \(x_\mathrm{b}\) only, with \(\tilde{g}(0)=0\) and \(\tilde{g}(1)=1\). For instance, we could set \(g(x)=x_\mathrm{b}\).^{3} On the other hand, it could also make sense to set \(g(U)>0\), which would represent the “benefit of the doubt.” An intuitive choice would then be \(g(x)=x_\mathrm{b}+ax_\mathrm{u}\), i.e., the expectation value corresponding to x. We postpone the precise details of how the function g can/should be chosen, and introduce a very broad definition.
Definition 12
Differently from \(\otimes \), the operator \(\boxtimes \) has a well-defined interpretation in terms of evidence handling. The following theorem states that the evidence underlying \(x\boxtimes y\) is a fraction of the evidence underlying y, defined by a scalar weight depending on x.
Theorem 2
 1.
\(x\boxtimes y\in \varOmega '\).
 2.
\(p(x\boxtimes y)=g(x)p(y)\) and \(n(x\boxtimes y)=g(x)n(y)\).
 3.
\((x\boxtimes y)_\mathrm{b}/(x\boxtimes y)_\mathrm{d}=y_\mathrm{b}/y_\mathrm{d}\).
 4.
\(x\boxtimes U=U\).
 5.
Discounting cannot decrease uncertainty, i.e., \((x\boxtimes y)_\mathrm{u}\ge y_\mathrm{u}\).
Proof
Property 1 follows from \(x\boxtimes y=g(x)\cdot y\) and the first property in Lemma 2. Property 2: We compute \(p(x\boxtimes y)=(x\boxtimes y)_\mathrm{b}/(x\boxtimes y)_\mathrm{u}\) using the definition (21), which yields \(g(x)y_\mathrm{b}/y_\mathrm{u}=g(x)p(y)\). For \(n(x\boxtimes y)\), the derivation is analogous. Property 3: Follows directly by dividing the belief and disbelief part of (21). Property 4: Follows by setting \(y_\mathrm{b}=0\), \(y_\mathrm{d}=0\) and \(y_\mathrm{u}=1\) in (21). Property 5: We have \((x\boxtimes y)_\mathrm{u}=\frac{y_\mathrm{u}}{(y_\mathrm{b}+y_\mathrm{d})g(x)+y_\mathrm{u}}\). Since \(y_\mathrm{b}+y_\mathrm{d}+y_\mathrm{u}=1\) and \(g(x)\in [0,1]\), the denominator of the fraction lies in the range \([y_\mathrm{u},1]\). \(\square \)
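In code, the new discounting rule is a one-liner on top of scalar multiplication. The sketch below uses \(g(x)=x_\mathrm{b}\) as the default weight (one admissible choice among several, as discussed above), so that the properties of Theorem 2 can be checked directly:

```python
def scalar_mul(alpha, x):
    """alpha . x, proportional to (alpha*b, alpha*d, u)."""
    b, d, u = x
    s = alpha * (1.0 - u) + u
    return (alpha * b / s, alpha * d / s, u / s)

def discount(x, y, g=lambda o: o[0]):
    """New discounting x [x] y = g(x) . y (Definition 12); default g(x) = x_b."""
    return scalar_mul(g(x), y)
```

One can verify that \(x\boxtimes U=U\), that discounting never decreases uncertainty, and that the belief/disbelief ratio of y is preserved, in line with Theorem 2; the permutation symmetry of Lemma 6 also holds numerically.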
Corollary 1
We stress again that the whole ‘dogmatic’ line between B and D is not part of the opinion space \(\varOmega '\), so that we avoid having to deal with infinite amounts of evidence.
Theorem 3
There is no function \(g:\varOmega '\rightarrow [0,1]\) such that \(x\boxtimes y=x\otimes y\) for all \(x,y\in \varOmega '\).
Proof
On the one hand, we have \(x\otimes y=(x_\mathrm{b}y_\mathrm{b},x_\mathrm{b}y_\mathrm{d},1-x_\mathrm{b}y_\mathrm{b}-x_\mathrm{b}y_\mathrm{d})\). On the other hand, \(x\boxtimes y=\frac{(g(x)y_\mathrm{b},g(x)y_\mathrm{d},y_\mathrm{u})}{g(x)(y_\mathrm{b}+y_\mathrm{d})+y_\mathrm{u}}\). Demanding that they are equal yields, after some rewriting, \(g(x)=x_\mathrm{b}y_\mathrm{u}/[1-x_\mathrm{b}(1-y_\mathrm{u})]\). This requires g(x), which is a function of x only, to be a function of \(y_\mathrm{u}\) as well. \(\square \)
Being based on the scalar multiplication rule (and hence ultimately on the \(\oplus \) rule), our operation \(\boxtimes \) has several properties that \(\otimes \) lacks: (i) right-distribution; (ii) permutation symmetry of the parties that transfer evidence. This is demonstrated below.
Lemma 5
Proof
It follows trivially from \(x\boxtimes (y\oplus z)=g(x)\cdot (y\oplus z)\) and Lemma 3. \(\square \)
This distributive property resolves the issue discussed in Sect. 3.2: using the \(\boxtimes \) operator, it does not matter if y and z are combined before or after the discounting. This solves the inconsistency caused by the \(\otimes \) operation.
Notice also that the left-hand side of (23) obviously does not double count x; hence, the expression on the right-hand side does not double count x either. In contrast, the right-hand side expression with \(\otimes \) instead of \(\boxtimes \) would be double counting. We come back to this point in Sect. 5.6.
Lemma 6
Proof
\(x_1\boxtimes (x_2\boxtimes y) =g(x_1)\cdot (g(x_2)\cdot y)\). Using Lemma 4, this reduces to \((g(x_1)g(x_2))\cdot y\). Exactly the same reduction applies to \(x_2\boxtimes (x_1\boxtimes y)\). \(\square \)
Also note that \(\boxtimes \) does not have a left-distribution property for arbitrarily chosen g. It takes some effort to define a reasonable function g that yields left-distributivity.
Lemma 7
There is no function \(g:\varOmega '\rightarrow [0,1]\) that satisfies \(\lim _{s\rightarrow B}g(s)=1\) and gives \((x\oplus y)\boxtimes z=(x\boxtimes z)\oplus (y\boxtimes z)\) for all \(x,y,z\in \varOmega '\).
Proof
We consider the limit \(x\rightarrow B,y\rightarrow B\). On the one hand, \((x\oplus y)\boxtimes z\rightarrow B\boxtimes z=z\). On the other hand, \((x\boxtimes z)\oplus (y\boxtimes z)\rightarrow z\oplus z\). \(\square \)
It may look surprising that we cannot achieve left-distributivity with a function g chosen from a very large function space with only a single constraint (and a very reasonable-looking constraint at that). But left-distributivity requires \(g(x\oplus y)=g(x)+g(y)\), which conflicts with the constraint \(\lim _{s\rightarrow B} g(s)=1\).
5.5 New specific discounting rule
One way to satisfy \(g(x\oplus y)=g(x)+g(y)\) is by setting \(g(x)\propto p(x)\). This approach, however, causes some complications. Suppose we define \(g(x)=p(x)/\theta \), where \(\theta \) is some constant. If the amount of positive evidence ever exceeds \(\theta \), then the discounting factor becomes larger than 1, i.e., amplification instead of reduction, which is an undesirable property. If we redefine g such that factors larger than 1 are mapped back to 1, then we lose the distribution property. We conclude that the “g proportional to evidence” approach can only work if the maximum achievable amount of positive evidence in a given trust network can be upper-bounded by \(\theta \).
Definition 13
We stress again that \(\theta \) depends on the interactions between entities within the system, i.e., on the structure of the trust network and the maximum amount of positive evidence in the network.
Lemma 8
Proof
The right-hand side evaluates to \((x\odot z)\oplus (y\odot z)=[g(x)\cdot z]\oplus [g(y)\cdot z]\) \(=[g(x)+g(y)]\cdot z\). The g-function in Definition 13 equals positive evidence divided by \(\theta \); hence, \(g(x)+g(y)=[p(x)+p(y)]/\theta \). The left-hand side evaluates to \(g(x\oplus y)\cdot z\) with \(g(x\oplus y)=\theta ^{-1}p(x\oplus y)\) \(=[p(x)+p(y)]/\theta \). \(\square \)
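Lemma 8 (and the associativity of Lemma 9 below) can be verified numerically. The sketch implements \(\odot \) as scalar multiplication by \(g(x)=p(x)/\theta \); the helper names and the value of \(\theta \) are illustrative assumptions:

```python
C = 2.0        # prior weight in the evidence-opinion mapping
THETA = 100.0  # assumed upper bound on positive evidence in the network

def pos_evidence(x):
    """Positive evidence underlying x: p(x) = c * b / u."""
    b, d, u = x
    return C * b / u

def scalar_mul(alpha, x):
    b, d, u = x
    s = alpha * (1.0 - u) + u
    return (alpha * b / s, alpha * d / s, u / s)

def consensus(x, y):
    xb, xd, xu = x
    yb, yd, yu = y
    k = xu + yu - xu * yu
    return ((xu * yb + yu * xb) / k, (xu * yd + yu * xd) / k, xu * yu / k)

def odot(x, y):
    """x (.) y: discounting with the linear weight g(x) = p(x) / theta."""
    return scalar_mul(pos_evidence(x) / THETA, y)
```

Left-distributivity follows from \(p(x\oplus y)=p(x)+p(y)\): discounting z by the combined opinion \(x\oplus y\) agrees with combining the two separately discounted copies of z.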
Lemma 9
Proof
From (26), we see that \(x\odot (y\odot z) =[g(x)g(y)]\cdot z\), but now we have \(g(x)g(y)=\theta ^{-2}p(x)\,p(y)\). From (26), we also see \((x\odot y)\odot z=g(g(x)\cdot y)\cdot z\), but now we have \(g(g(x)\cdot y)=\theta ^{-1}p(\theta ^{-1}p(x)\cdot y)\) \(=\theta ^{-2}p(x)\,p(y)\). \(\square \)
We are not claiming that \(\odot \) is the proper discounting operation to use. It has the unpleasant property that the negative evidence underlying x is completely ignored in the computation of \(x\odot y\). A quick fix of the form \(g(x)\propto p(x)-n(x)\) does not work, since it can cause \(g(x)<0\) and therefore \(x\boxtimes y\notin \varOmega '\).
We note that there is no alternative g-function to the ones discussed above if linearity of g is required. This is formalized in the following lemma.
Lemma 10
The property \(g(x\oplus y)=g(x)+g(y)\) can only be achieved by setting \(g(x)=\alpha p(x)+\beta n(x)\), where \(\alpha \) and \(\beta \) are scalars.
The proof is given in the “Appendix.” Note that, for sufficiently smooth g, it is possible to prove that the property \(g(x\oplus y)=g(x)+g(y)\) implies \(g(g(x)\cdot y)=g(x)g(y)\), i.e., associativity.
5.6 The new discounting rule avoids double counting
Lemma 11
Proof
The evidence underlying \(X_1\boxtimes Y\) is the evidence from Y, scalarmultiplied by \(g(X_1)\). Likewise, the evidence underlying \(X_2\boxtimes Z\) is a scalar multiple of (p(Z), n(Z)). Since Y and Z are independent, the evidence on the left and right side of the ‘\(\oplus \)’ in (31) is independent. \(\square \)
Lemma 11 still holds when \(X_1=X_2\). Based on the results above, we can conclude that:
Corollary 2
Transporting different pieces of evidence over the same link x with the \(\boxtimes \) operation is not double counting x.
Thus, many expressions that are problematic in SL become perfectly acceptable in EBSL, simply because \(\boxtimes \) is just an (attenuating) evidence transport operation, whereas SL’s \(\otimes \) is a far more intricate operation that mixes evidence from its left and right operands (Eqs. 8 and 9).
6 Flow-based reputation with uncertainty
In this section, we will use the discounting rule \(\boxtimes \) without specifying the function g. We show that EBSL can be applied to arbitrarily connected trust networks and that the simple recursive approach (12), with \(\otimes \) replaced by \(\boxtimes \), yields consistent results that avoid the double-counting problem.
6.1 Recursive solutions using EBSL

If \(g(A_{12})=0\) and \(g(U)=0\) then \(R_{12}=A_{12}\).

If \(A_{23}\ne U\), \(g(x)>0\) for \(x\ne U\), and \(A_{32}\rightarrow B\) then \(R_{12}\rightarrow B\). The direct \(A_{12}\) can be overwhelmed by the indirect \(A_{32}\), even if user 1 has little trust in user 2. This demonstrates the danger of allowing opinions close to full belief.

If \(A_{23}\rightarrow B\) and \(g(B)=1\) then \(R_{12}\rightarrow A_{12}\oplus A_{32}\).
6.2 Recursive solution in matrix form
 1.
Find the fixed point \(X_*\) satisfying \(f(X_*)=X_*\).
 2.
Take \(R=\mathrm{offdiag}\, X_*\).
At this point, two important questions have to be answered: (i) whether the recursive approach for (40) converges, i.e., whether the fixed point exists, and (ii) when it converges, whether the fixed point solution is unique. When there are no loops in the network, we trivially have convergence and the fixed point is unique. Intuitively, repeated applications of f after the trust network has been completely explored do not propagate additional evidence.
In the case of general networks, the situation is more complicated. We can prove that there is no divergence. In every iteration of the mapping \(X\mapsto f(X)\), the new value of X is of the form \(X_{ij}=A_{ij}\oplus \bigoplus _k g((\mathrm{offdiag}\, X)_{ik})\cdot A_{kj}\). We see that the evidence in each \(A_{kj}\) gets multiplied by a scalar smaller than 1. Hence, no matter how many iterations are done, the amount of evidence about user j that is contained in X can never exceed the amount of evidence about j present in A. This puts a hard upper bound on the amount of evidence in \(\mathrm{offdiag}\, X\), which prevents the solution from ‘running off’ toward full belief. Hence, the evidence underlying \(R_{ij}\) cannot be greater than the total amount of evidence underlying the opinions in A about user j.
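The iteration can be sketched as follows (our own minimal implementation; representing opinions as (b, d, u) triples, choosing \(g(x)=x_\mathrm{b}\) and a fixed convergence threshold are illustrative assumptions):

```python
U = (0.0, 0.0, 1.0)  # full uncertainty

def scalar_mul(alpha, x):
    b, d, u = x
    s = alpha * (1.0 - u) + u
    return (alpha * b / s, alpha * d / s, u / s)

def consensus(x, y):
    xb, xd, xu = x
    yb, yd, yu = y
    k = xu + yu - xu * yu
    return ((xu * yb + yu * xb) / k, (xu * yd + yu * xd) / k, xu * yu / k)

def fixed_point(A, g=lambda o: o[0], tol=1e-9, max_iter=1000):
    """Iterate X_ij = A_ij (+) sum_k g((offdiag X)_ik) . A_kj until convergence."""
    n = len(A)
    X = [row[:] for row in A]
    for _ in range(max_iter):
        Y = [[U] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue  # the diagonal stays full uncertainty
                acc = A[i][j]
                for k in range(n):
                    if k != i:  # offdiag X: self-opinions carry no weight
                        acc = consensus(acc, scalar_mul(g(X[i][k]), A[k][j]))
                Y[i][j] = acc
        delta = max(abs(a - b) for ri, si in zip(X, Y)
                    for xi, yi in zip(ri, si) for a, b in zip(xi, yi))
        X = Y
        if delta < tol:
            break
    return X
```

On a chain 0 → 1 → 2, evidence about user 2 reaches user 0 after one iteration, attenuated by \(g\) but with the belief/disbelief ratio of the transported opinion preserved.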
It can be observed that, being flow-based, our fixed point equation for the matrix R has the same form as the fixed point equation for a Markov chain. The main difference is that an ordinary Markov chain has a real-valued transition matrix, whereas we have opinions \(A_{ij}\in \varOmega ^*\); in our case, multiplication of reals is replaced by \(\boxtimes \) and addition by \(\oplus \). In spite of these differences, we observe in our experiments that every type of behavior of Markov chain flow also occurs for R. Indeed, experiments on real data show that we have convergence (see Sect. 7). Moreover, for some fine-tuned instances of the A-matrix, which are exceedingly unlikely to occur naturally, oscillations can exist just as in Markov chains; after a number of iterations, the powers of A jump back and forth between two states. Just as in flow-based reputation (1), adding the direct opinion matrix A in each iteration dampens the oscillations and causes convergence.
6.3 Recursive solutions using the \(\odot \) discounting rule
7 Evaluation
We have implemented our flow-based reputation model with uncertainty and performed a number of experiments to evaluate the practical applicability and “accuracy” of the model using both synthetic and real-life data. Note that, while it is possible to define some limiting situations (synthetic data) in which a certain result is expected, in general numerical experiments cannot ‘prove’ which approach is right, because there is no ‘ground truth’ solution to the reputation problem that we could compare against. The only things that can be verified by numerics are: (i) Do the results make sense? (ii) Is the method practical? Thus, we have used synthetic data to compare the accuracy of the opinions computed using different reputation models. On the other hand, we used real-life data to study the practical applicability.
Experiments assessing the robustness of the reputation model against attacks such as slandering, self-promotion and Sybil attacks [8, 13, 35] have not been considered in this work and are left for future work, as our goal here is the definition of a mathematical foundation for the development of reputation systems. A study of the robustness against attacks requires considering many other aspects that are orthogonal to this work.
In the remainder of this section, first we briefly present the implementation; then, we report and discuss the results of the experiments.
7.1 Implementation
We have developed a tool in Python that implements the procedure for computing the fixed point described in Sect. 6.2. All SL and EBSL computation rules presented in this paper have been implemented in a Python library.
Table 1 Evidence, opinions and aggregate ratings for case C1
Source  Target  Evidence  Opinion  \(A_{xy}\) 

1  2  (400, 300)  (0.570, 0.427, 0.003)  0.571 
2  3  (10, 5)  (0.588, 0.294, 0.118)  0.667 
3  4  (500, 0)  (0.996, 0.000, 0.004)  1.0 
3  5  (500, 0)  (0.996, 0.000, 0.004)  1.0 
4  5  (500, 0)  (0.996, 0.000, 0.004)  1.0 
4  6  (500, 0)  (0.996, 0.000, 0.004)  1.0 
5  6  (500, 0)  (0.996, 0.000, 0.004)  1.0 
6  7  (5, 5)  (0.417, 0.417, 0.166)  0.5 
7  P  (10, 90)  (0.098, 0.882, 0.020)  0.1 
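The opinion and rating columns of Table 1 can be reproduced from the evidence column alone. The sketch below assumes the prior weight \(c=2\) and an aggregated rating equal to the fraction of positive evidence, \(p/(p+n)\); both assumptions match the tabulated values:

```python
def opinion(p, n, c=2.0):
    """(b, d, u) = (p, n, c) / (p + n + c)."""
    s = p + n + c
    return tuple(v / s for v in (p, n, c))

def rating(p, n):
    """Aggregated rating: fraction of positive evidence."""
    return p / (p + n)

# first two rows of the table
for p, n in [(400, 300), (10, 5)]:
    b, d, u = opinion(p, n)
    print((round(b, 3), round(d, 3), round(u, 3)), round(rating(p, n), 3))
```

The printed triples agree with the (0.570, 0.427, 0.003) and (0.588, 0.294, 0.118) entries of the table up to rounding, and the ratings with 0.571 and 0.667.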
Table 2 Comparison in terms of trust values \(r_{1P}\) and opinions \(F_{1P}\)
C1  C2  C3  

Flow-based  0.401  0.392  0.501 
FlowSL  (0.024, 0.220, 0.756)  (0.003, 0.246, 0.751)  (0.9993, 0.0000, 0.0007) 
SL (canonical form)  (0.014, 0.123, 0.863)  (0.002, 0.137, 0.861)  (0.9990, 0.0000, 0.0010) 
EBSL \((g(x)=x_b)\)  (0.095, 0.859, 0.046)  (0.011, 0.984, 0.005)  (0.9998, 0.0000, 0.0002) 
EBSL \((g(x)=\sqrt{x_\mathrm{b}})\)  (0.097, 0.873, 0.030)  (0.011, 0.986, 0.003)  (0.9998, 0.0000, 0.0002) 
EBSL \((\odot )\)  (0.000, 0.000, 1.000)  (0.000, 0.006, 0.994)  (0.9970, 0.0000, 0.0030) 
Table 3 Comparison in terms of amount of evidence underlying opinion \(F_{1P}\)
C1  C2  C3  

FlowSL  (0.065, 0.582)  (0.007, 0.655)  (2763.7, 0) 
SL (canonical form)  (0.032, 0.284)  (0.004, 0.319)  (1999.16, 0) 
EBSL \((g(x)=x_b)\)  (4.166, 37.490)  (4.166, 374.901)  (9998, 0) 
EBSL \((g(x)=\sqrt{x_\mathrm{b}})\)  (6.455, 58.095)  (6.455, 580.946)  (9999, 0) 
EBSL \((\odot )\)  (0.000, 0.001)  (0.000, 0.011)  (781.25, 0) 
7.2 Synthetic data
We have conducted a number of experiments using synthetic data to analyze and compare the different approaches for trust computation. The goal of these experiments is to analyze the behavior of the reputation models in a number of limiting situations for which it is known a priori how the result should behave.
Experiment settings The experiments are based on the trust network in Fig. 2. We considered six approaches: (i) the flow-based method without uncertainty in Eq. (1); (ii) the flow-based SL approach presented in Sect. 3; (iii) SL in which the specification of the trust network is transformed to canonical form by removing the edge from 4 to 5 (i.e., \(A_{45}\) is set to U in Eq. (14)); (iv) EBSL with \(g(x)=x_\mathrm{b}\); (v) EBSL with \(g(x)=\sqrt{x_\mathrm{b}}\); and (vi) EBSL using the operator \(\odot \).
For the analysis, we consider three cases. Table 1 presents the evidence along with the derived opinions and aggregated ratings for our first case (C1). In this case, we use \(\theta =1000\) for EBSL using operator \(\odot \). In the second case (C2), we consider the same evidence except for the edge from 7 to P which is now (10, 900). The corresponding opinion and aggregate rating are \(T_{7P}=(0.011,0.987,0.002)\) and \(A_{7P}=0.011\), respectively. In the last case (C3), we consider the evidence for all edges to be (10000, 0) and we set \(\theta =20000\). In this case, all opinions are equal to (0.9998, 0.0000, 0.0002) and aggregated ratings are equal to 1.
Results The results of the trust computation are presented in Table 2 in terms of opinions (trust value for the flow-based method), and in Table 3 in terms of amount of evidence. Note that in Table 3, we have not included the amount of evidence for the flow-based approach of (1), as it is not possible to reconstruct it from trust values.
The results confirm our expectation about the impact of the trust network representation on the trust computation when SL is used. As expected, the uncertainty component of opinion \(F_{1P}\) computed using SL is larger when the trust network in Fig. 2 is represented in canonical form. Indeed, in this case some trust information (i.e., evidence) is discarded (recall that uncertainty depends on the amount of evidence: the larger the amount of evidence, the lower the uncertainty; see Sect. 5.2). In contrast, the representation of the trust network in Eq. (13) is affected by double counting, leading to more evidence and thus to a lower uncertainty component of \(F_{1P}\).
The results show that the SL and EBSL approaches preserve the ratio between the belief and disbelief components (Table 2) and consequently the ratio between positive and negative evidence (Table 3). This ratio is close to the one between the positive and negative evidence underlying the functional trust \(T_{7P}\). If the amount of evidence increases (C2), one would expect that the amount of evidence underlying opinion \(F_{1P}\) increases proportionally to the increase in the amount of evidence underlying \(T_{7P}\) (Theorem 2). Accordingly, the amount of positive evidence underlying \(F_{1P}\) should be the same in C1 and C2, and the amount of negative evidence underlying \(F_{1P}\) in C2 should be 10 times the amount of negative evidence in C1. We can observe in Table 3 that this is true for EBSL but not for SL. This is explained by the fact that \(x \otimes y\) is not an x-dependent multiple of the evidence underlying y, as was shown in Eqs. (8) and (9).
Finally, in the last case (C3), we have considered a limiting case where every trust relation in the network of Fig. 2 is characterized by a large amount of positive evidence. Here, one would expect that the opinion \(F_{1P}\) is close to (1, 0, 0) and the trust value \(r_{1P}\) close to 1. From Table 2, we can observe that SL and all EBSL approaches meet this expectation. However, if we look closely at the evidence underlying such an opinion (Table 3), we can observe that when the SL discounting operator \(\otimes \) and the EBSL operator \(\odot \) are used, a large amount of evidence is “lost” on the way. In contrast, we expect the amount of evidence underlying \(F_{1P}\) to be close to that of \(T_{7P}\) (Theorem 2). Table 3 shows that EBSL, both for \(g(x)=x_\mathrm{b}\) and \(g(x)=\sqrt{x_\mathrm{b}}\), preserves the amount of evidence when referral trust relations are close to full belief.
Moreover, Table 2 shows that the value of \(r_{1P}\) is close to neutral trust rather than to full trust. This can be explained by Eq. (1) and the impossibility of expressing uncertainty. On the one hand, at each iteration Eq. (1) computes a weighted average of aggregated ratings, where the weights are equal to the trust a user places in the users providing recommendations. On the other hand, the flow-based approach does not distinguish between neutral trust (equal amounts of positive and negative evidence) and full uncertainty (zero evidence). In particular, the lack of evidence between two users is represented in the matrix of aggregated ratings A as neutral trust (see Eq. 46). In sparse trust networks (i.e., networks with only a few edges) like the one in Fig. 2, the neutral trust used to express uncertainty has a significant impact on the weighted average used to compute trust values.^{5} These results demonstrate that the ability to express uncertainty is fundamental to capturing the actual trustworthiness of a target, which is one of the main motivations for this work.
7.3 Real-life data
We performed a number of experiments using real-life data to assess the practical applicability of EBSL and of flow-based reputation models built on top of EBSL. In particular, we study the impact of various discounting operators on the propagation of evidence and the convergence speed of the iterative procedure.
Experiment settings For the experiments, we used a dataset derived from a BitTorrent-based client called Tribler [29]. The dataset consists of information about 10,364 nodes and 44,796 interactions between the nodes. Each interaction describes the amount of data in bytes transferred from one node to another. The amount of transferred data can be either negative, indicating an upload from the first node to the second node, or positive, indicating a download.
To provide an incentive for sharing information, some BitTorrent systems require users to have at least a certain ratio of uploaded vs. downloaded data. Along this line, we treat the amount of data uploaded by a user as positive evidence, and the downloaded amount as negative evidence. Intuitively, positive evidence indicates a user’s inclination to share data and thus to contribute to the community.
It is worth noting that Tribler has a high population turnover and, thus, the dataset contains very few long-lived and active nodes alongside many loosely connected nodes of low activity [9]. This results in a direct referral trust matrix that is very sparse (i.e., most opinions are full uncertainty). In this sparse form, it is inefficient to do large matrix multiplications. Therefore, we have grouped the nodes into 200 clusters, each of which contains about 50 nodes. Intuitively, a cluster may be seen as the set of nodes under the control of a single user.
In all four models, we computed the final referral trust matrix R. The amount of evidence in the matrix A is visualized in Fig. 4. Figure 4a presents the amount of positive evidence, and Fig. 4b the total amount of evidence (sum of positive and negative). We can observe the presence of a few active users who had interactions with a lot of other users (visible as dark lines). A horizontal dark line in Fig. 4a indicates a user who downloaded data from many other users. The vertical dark lines in Fig. 4b represent negative evidence: many users uploading to the same few users. Note that Fig. 4b is not symmetric, since an interaction never results in user feedback from both sides.
Results The amount of evidence in R is presented in Fig. 5 (only positive evidence) and in Fig. 6 (sum of positive and negative evidence). For most users, the amount of evidence in R has increased compared to the initial situation (Fig. 4a, b). The plots are characterized by uniform vertical stripes, indicating that (most) users have approximately the same amount of (positive) evidence about a given user. The amount of evidence, however, remains close to 0 for those users who had very few interactions with other users (horizontal white lines in Figs. 5, 6). It is also worth noting that the diagonal of R is clearly recognizable as a white line. This is due to the fact that we impose the diagonal to be full uncertainty, i.e., users cannot have an opinion about themselves, to reduce the effect of self-promotion.
The choice of the discounting operator, which defines how evidence is propagated, has a significant impact on the amount of evidence in R. Ideally, users should be able to use the available trust information to decide whether to engage in an interaction with another user [42]. Therefore, a reputation system should allow users to gather as many recommendations (i.e., as much evidence) as possible from trusted users. However, the use of the \(\odot \) operator causes most of the evidence to be lost along the way. This can be clearly seen by observing that the initial situation in Fig. 4a (Fig. 4b, respectively) and the final referral trust matrix R in Fig. 5a (Fig. 6a, respectively) are almost the same. Figures 5b and 6b show that the \(\otimes \) operator propagates more evidence than \(\odot \). We remind the reader that \(\otimes \) causes double counting as well as discarding of evidence, as shown in Examples 2 and 3. The \(\boxtimes \) operator, with both \(g(x)=x_\mathrm{b}\) and \(g(x)=\sqrt{x_\mathrm{b}}\), results in the propagation of more evidence than the \(\odot \) and \(\otimes \) operators, as shown in Fig. 5c, d (positive evidence) and in Fig. 6c, d (total evidence). These findings confirm the results obtained with the synthetic data (Table 3). Therefore, we conclude that the \(\boxtimes \) operator is preferable to the other operators.
Convergence We have analyzed the convergence of the iterative approach using the Tribler dataset. The experiments show that the reputation models built on top of EBSL converge. In particular, EBSL with \(g(x)=x_\mathrm{b}\) converges after 47 iterations, EBSL with \(g(x)=\sqrt{x_\mathrm{b}}\) converges after 24 iterations, and EBSL using the \(\odot \) operator after nine iterations. One can observe that, in all cases, convergence is fast. Accordingly, we believe that the proposed reputation model can handle real scenarios. In the experiments, we also analyzed the convergence of the naïve approach that combines flow-based reputation and SL, as presented in Sect. 3. Here, convergence is not reached in a reasonable amount of time: After 1000 iterations, we still have \(\sum _{i,j}\delta (R^{(k+1)}_{ij},R^{(k)}_{ij})\approx 10^{-8}\).
To study the link between our approach and Markov chains, we performed additional experiments (not reported here) with a number of limiting situations, i.e., synthetic data unlikely to occur in real life. In particular, we studied the EBSL case where the powers of A show oscillations, i.e., \(A^{k+2}=A^k\) with \(A^{k+1}\ne A^k\). Here, \(A^k\) stands for \(((\cdots \boxtimes A)\boxtimes A)\boxtimes A\). This can occur in Markov chains too. In flow-based reputation (1), the added term \((1-\alpha )s_x\) dampens the oscillations and thus improves convergence. Similarly, in our EBSL experiments, the added term A in each iteration (40) gives a convergent result in spite of the oscillatory nature of A. This strengthens our conviction that EBSL correctly captures the idea of reputation flow.
8 Related work
The notion of uncertainty is becoming an important concept in reputation systems and, more generally, in data fusion [5, 24]. Uncertainty has been proposed as a quantitative measure of the accuracy of predicted beliefs, and it is used to represent the level of confidence in the fusion result. Several approaches have extended reputation systems with the notion of uncertainty [15, 30, 32, 39]. For instance, Reis et al. [32] associate a parameter with opinions to indicate the degree of certainty to which the average rating is assumed to be representative for the future. Teacy et al. [39] account for uncertainty by assessing the reputation of information sources based on the perceived accuracy of past opinions. Differently from the previous approaches, subjective logic [15] considers uncertainty as a dimension orthogonal to belief and disbelief, which is based on the amount of available evidence.
One of the main challenges in reputation systems is how to aggregate opinions, especially in the presence of uncertainty. SL provides two main operators for aggregating opinions: consensus and discounting (see Sect. 2.2). Many studies have analyzed strategies for combining conflicting beliefs [16, 19, 37] and have proposed new combining strategies and operators [7, 44]. In Sect. 5.2, we reconfirm that the standard consensus operator used in SL is well-founded on the theory of evidence.
In contrast, less effort has been devoted to studying the discounting operator. Bhuiyan and Jøsang [4] propose two alternative discounting operators: an operator based on opposite belief favouring, for which the combination of two disbeliefs results in belief, and a base rate sensitive transitivity operator in which the trust in the recommender is a function of the base rate. Similarly to the traditional discounting operator of SL, these operators are founded on probability theory. As shown in Sect. 3, employing operators founded on different theories has the disadvantage that these operators may not “cooperate.” In the case of SL, this lack of cooperation results in the inability to apply SL to arbitrary trust networks. In particular, trust networks have to be expressed in a canonical form in which edges are not repeated. A possible strategy to reduce a trust network to a canonical form is to remove the weakest edges (i.e., the least certain paths) until the network can be expressed in canonical form [20]. This, however, has the disadvantage that some (possibly even much) trust information is discarded. An alternative canonicalization method called edge splitting was presented in [18]. The basic idea of this method is to split a dependent edge into a number of different edges equal to the number of different instances of the edge in the network expression. Nonetheless, the method requires that the trust network is acyclic; if a loop occurs in the trust network, some edges have to be removed in order to eliminate the loop, thus discarding trust information. In contrast, we have constructed a discounting operator founded on the theory of evidence. This operator together with the consensus operator allows the computation of reputation for arbitrary trust networks, which can include loops, without the need to discard any information.
Cerutti et al. [7] define three requirements for discounting based on the intuitive understanding of a few scenarios: Let A be x's opinion about y's trustworthiness, C the level of certainty that y has about a proposition P, and \(F= A \circ C\) the (indirect) opinion that x has about P. (i) If C is pure belief, then \(F=A\); (ii) If C is complete uncertainty, then \(F=C\); (iii) The belief part of F is always less than or equal to the belief part of A. Based on these requirements, they propose a family of graphical discounting operators which, given two opinions, project one opinion into the admissible space of opinions given by the other opinion. These operators are founded on geometric properties of the opinion space. This makes it difficult to determine whether the resulting theory is consistent with the theory of evidence or probability theory. Our discounting operator satisfies requirement (ii) above, but not requirements (i) and (iii); indeed, for \(g(A)>0\) it holds that \(A\boxtimes B = B\) (where B represents full belief). It is worth noting that the requirements proposed in [7] are not well-founded in the theory of evidence: B means that there is an infinite amount of positive evidence, and discounting an infinite amount of evidence still gives an infinite amount of evidence. In Theorem 1, we provided a number of desirable properties founded on the theory of evidence. In particular, if \(p+n\rightarrow \infty \) then \(u\rightarrow 0\). Accordingly, if \(C=B\) the uncertainty component of F should be equal to 0, regardless of the precise (nonzero) value of the uncertainty component of A.
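The infinite-evidence argument can be checked numerically (our own sketch, with our own function names): approximating full belief B by ever larger amounts of positive evidence and scaling it by any fixed weight greater than zero, the uncertainty of the discounted opinion tends to 0 rather than being bounded below by the uncertainty of A.

```python
def from_evidence(p, n):
    """SL mapping from evidence counts to an opinion (b, d, u)."""
    s = p + n + 2.0
    return (p / s, n / s, 2.0 / s)

# Approximate full belief B by growing positive evidence, discount it with
# a fixed weight g = 0.3 (an arbitrary example value > 0), and observe the
# uncertainty u = 2/(g*p + 2) vanish as p grows.
g = 0.3
for p in (10.0, 1e3, 1e6):
    b, d, u = from_evidence(g * p, 0.0)
    print(p, u)  # u shrinks toward 0 as p -> infinity
```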
To our knowledge, our proposal is the first work that integrates uncertainty into flow-based reputation.
9 Conclusion
In this paper, we have presented a flow-based reputation model with uncertainty that allows the construction of an automated reputation assessment procedure for arbitrary trust networks. We illustrated and discussed the limitations of a naïve approach to combining flow-based reputation and SL. An analysis of SL shows that the problem is rooted in the lack of "cooperation" between the SL consensus and discounting rules due to the different nature of these two operators. In order to solve this problem, we have revised SL by introducing a scalar multiplication operator and a new discounting rule based on the flow of evidence. We refer to the new opinion algebra as Evidence-Based Subjective Logic (EBSL).
A generic definition of discounting (the operator \(\boxtimes \)) lacks the associative property satisfied by the SL operator \(\otimes \). This, however, is not problematic, since the flow of evidence has a well-defined direction. Furthermore, the operator \(\boxtimes \) is right-distributive, a property that one would intuitively expect of opinion discounting. One can choose a specific discounting function g(x) proportional to the amount of positive evidence in x. The resulting discounting operator is denoted as \(\odot \). As shown in Table 4, this operator is completely linear (associative as well as left- and right-distributive). However, it has potentially undesirable behavior, since it ignores negative evidence and requires a carefully chosen system parameter related to the maximum amount of positive evidence in the system.
Table 4 Comparison of the operators \(\otimes \), \(\boxtimes \), and \(\odot \)

                        \(\otimes \)   \(\boxtimes \)   \(\odot \)
Associativity           Yes            No               Yes
Left-distributivity     No             No               Yes
Right-distributivity    No             Yes              Yes
Recursive solutions     No             Yes              Yes
The work presented in this paper provides the basis for several directions of future work. We have shown how EBSL can be used to build a flow-based reputation model with uncertainty. However, several reputation models have been proposed in the literature to compute reputation over a trust network. An interesting direction is to study the applicability of EBSL as a mathematical foundation for these models. This will also make it possible to study the impact of uncertainty on the robustness of reputation systems against attacks such as self-promotion, slandering and Sybil attacks.
Footnotes
 1.
Note that it is entirely possible for two independent random variables \(X,Y\in \varOmega \) to have the same numerical value \(x=y\) by accident. In that case, \(x\oplus y\) is not double counting.
 2.
We consider only scenarios where all entities publish their opinions. If opinions are not published but communicated solely over the network links, then a recursive equation containing \(A\otimes R\) (the opposite order of \(R\otimes A\)) applies.
 3.
 4.
An even more compact formulation is possible if one is willing to temporarily use the full belief B in the computations. Let ‘\(\mathbf{1}\)’ be the diagonal matrix. Let \(Z=B\mathbf{1}+R\). Solving (32) for R is equivalent to solving \(Z=B\mathbf{1}\oplus (Z\boxtimes A)\) for Z.
 5.
It is worth noting that a trust value close to 1 can be obtained by Eq. 1 only if the trust network is a complete graph in which the aggregate rating associated with each edge is close to 1.
Notes
Acknowledgments
This study was funded by the Dutch national program COMMIT under the THeCS project (Grant No. P15), by ITEA2 under the FedSS project (Grant No. 11009), and by EDA under the IN4STARS 2.0 project (Grant No. B 0983 IAP4 GP).
Compliance with ethical standards
Conflicts of interest
The authors declare that they have no conflict of interest.
Research involving human participants and/or animals
This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent
This article does not contain any identifiable personal data.
References
 1. Abdul-Rahman, A., Hailes, S.: A distributed trust model. In: Proceedings of the 1997 Workshop on New Security Paradigms, pp. 48–60. ACM (1997)
 2. Asnar, Y., Zannone, N.: Perceived risk assessment. In: Proceedings of the 4th ACM Workshop on Quality of Protection, pp. 59–64. ACM (2008)
 3. Bharadwaj, K., Al-Shamri, M.: Fuzzy computational models for trust and reputation systems. Electron. Comm. Res. Appl. 8(1), 37–47 (2009)
 4. Bhuiyan, T., Jøsang, A.: Analysing trust transitivity and the effects of unknown dependence. Int. J. Eng. Bus. Manag. 2(1), 23–28 (2010)
 5. Bleiholder, J., Naumann, F.: Data fusion. ACM Comput. Surv. 41(1), 1:1–1:41 (2009)
 6. Brin, S., Page, L.: The anatomy of a large-scale hypertextual Web search engine. Comput. Netw. ISDN Syst. 30(1–7), 107–117 (1998)
 7. Cerutti, F., Toniolo, A., Oren, N., Norman, T.J.: Subjective logic operators in trust assessment: an empirical study. arXiv:1312.4828 (2013)
 8. Douceur, J.R.: The Sybil attack. In: Proceedings of the International Workshop on Peer-to-Peer Systems, LNCS 2429, pp. 251–260. Springer, Berlin (2002)
 9. Gkorou, D., Vinkó, T., Pouwelse, J.A., Epema, D.H.J.: Leveraging node properties in random walks for robust reputations in decentralized networks. In: Proceedings of the International Conference on Peer-to-Peer Computing, pp. 1–10. IEEE (2013)
10. Govindan, K., Mohapatra, P.: Trust computations and trust dynamics in mobile ad hoc networks: a survey. IEEE Commun. Surv. Tutor. 14(2), 279–298 (2012)
11. Grandison, T., Sloman, M.: A survey of trust in internet applications. IEEE Commun. Surv. Tutor. 3(4), 2–16 (2000)
12. Heuser, H.: Funktionalanalysis: Theorie und Anwendung, 4th edn. B.G. Teubner Verlag, Berlin (2006)
13. Hoffman, K., Zage, D., Nita-Rotaru, C.: A survey of attack and defense techniques for reputation systems. ACM Comput. Surv. 42, 1:1–1:31 (2009)
14. Huynh, T.D., Jennings, N.R., Shadbolt, N.R.: An integrated trust and reputation model for open multi-agent systems. Auton. Agent. Multi-Agent Syst. 13(2), 119–154 (2006)
15. Jøsang, A.: A logic for uncertain probabilities. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 9(3), 279–311 (2001)
16. Jøsang, A.: The consensus operator for combining beliefs. Artif. Intell. 141(1–2), 157–170 (2002)
17. Jøsang, A., Azderska, T., Marsh, S.: Trust transitivity and conditional belief reasoning. In: Dimitrakos, T., Moona, R., Patel, D., McKnight, D.H. (eds.) Trust Management VI, IFIP Advances in Information and Communication Technology, vol. 374, pp. 68–83. Springer, Berlin (2012)
18. Jøsang, A., Bhuiyan, T.: Optimal trust network analysis with subjective logic. In: Proceedings of the 2nd International Conference on Emerging Security Information, Systems and Technologies, pp. 179–184. IEEE (2008)
19. Jøsang, A., Daniel, M., Vannoorenberghe, P.: Strategies for combining conflicting dogmatic beliefs. In: Proceedings of the 6th International Conference on Information Fusion (2003)
20. Jøsang, A., Gray, E., Kinateder, M.: Simplification and analysis of transitive trust networks. Web Intell. Agent Syst. 4, 139–161 (2006)
21. Jøsang, A., Haller, J.: Dirichlet reputation systems. In: Proceedings of the International Conference on Availability, Reliability and Security, pp. 112–119. IEEE (2007)
22. Jøsang, A., Hayward, R., Pope, S.: Trust network analysis with subjective logic. In: Proceedings of the 29th Australasian Computer Science Conference, pp. 85–94. Australian Computer Society (2006)
23. Kamvar, S., Schlosser, M., Garcia-Molina, H.: The EigenTrust algorithm for reputation management in P2P networks. In: Proceedings of the 12th International Conference on World Wide Web, pp. 640–651. ACM (2003)
24. Keenan, T., Carbone, M., Reichstein, M., Richardson, A.: The model-data fusion pitfall: assuming certainty in an uncertain world. Oecologia 167(3), 587–597 (2011)
25. Lempel, R., Moran, S.: The stochastic approach for link-structure analysis (SALSA) and the TKC effect. Comput. Netw. 33(1), 387–401 (2000)
26. Lin, C., Varadharajan, V.: MobileTrust: a trust enhanced security architecture for mobile agent systems. Int. J. Inf. Secur. 9(3), 153–178 (2010)
27. Liu, L., Munro, M.: Systematic analysis of centralized online reputation systems. Decis. Support Syst. 52(2), 438–449 (2012)
28. Muller, T., Schweitzer, P.: On beta models with trust chains. In: Trust Management VII, IFIP AICT 401, pp. 49–65. Springer, Berlin (2013)
29. Pouwelse, J.A., Garbacki, P., Wang, J., Bakker, A., Yang, J., Iosup, A., Epema, D.H.J., Reinders, M., van Steen, M.R., Sips, H.J.: TRIBLER: a social-based peer-to-peer system. Concurr. Comput. Pract. Exp. 20(2), 127–138 (2008)
30. Regan, K., Cohen, R., Poupart, P.: The advisor-POMDP: a principled approach to trust through reputation in electronic markets. In: Proceedings of the 3rd Annual Conference on Privacy, Security and Trust (2005)
31. Richters, O., Peixoto, T.P.: Trust transitivity in social networks. PLoS ONE 6(4) (2011)
32. Ries, S., Habib, S.M., Mühlhäuser, M., Varadharajan, V.: CertainLogic: a logic for modeling trust and uncertainty. In: Trust and Trustworthy Computing, LNCS 6740, pp. 254–261. Springer, Berlin (2011)
33. Sentz, K., Ferson, S.: Combination of evidence in Dempster–Shafer theory. Technical report, Sandia National Laboratories, Albuquerque, New Mexico (2002)
34. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
35. Simone, A., Škorić, B., Zannone, N.: Flow-based reputation: more than just ranking. Int. J. Inf. Technol. Decis. Mak. 11(3), 551–578 (2012)
36. Smeltzer, L.R.: The meaning and origin of trust in buyer–supplier relationships. J. Supply Chain Manag. 33(1), 40–48 (1997)
37. Smets, P.: Analyzing the combination of conflicting belief functions. Inf. Fusion 8(4), 387–412 (2007)
38. Song, S., Hwang, K., Zhou, R., Kwok, Y.: Trusted P2P transactions with fuzzy reputation aggregation. IEEE Internet Comput. 9(6), 24–34 (2005)
39. Teacy, W.T.L., Patel, J., Jennings, N.R., Luck, M.: TRAVOS: trust and reputation in the context of inaccurate information sources. Auton. Agent. Multi-Agent Syst. 12(2), 183–198 (2006)
40. Trivellato, D., Zannone, N., Etalle, S.: GEM: a distributed goal evaluation algorithm for trust management. Theory Pract. Log. Program. 14(3), 293–337 (2014)
41. Vavilis, S., Petković, M., Zannone, N.: Impact of ICT on home healthcare. In: ICT Critical Infrastructures and Society, IFIP AICT 386, pp. 111–122. Springer, Berlin (2012)
42. Vavilis, S., Petković, M., Zannone, N.: A reference model for reputation systems. Decis. Support Syst. 61, 147–154 (2014)
43. Xiong, L., Liu, L.: PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. IEEE Trans. Knowl. Data Eng. 16(7), 843–857 (2004)
44. Zhou, H., Shi, W., Liang, Z., Liang, B.: Using new fusion operations to improve trust expressiveness of subjective logic. Wuhan Univ. J. Nat. Sci. 16(5), 376–382 (2011)
45. Zomlot, L., Sundaramurthy, S.C., Luo, K., Ou, X., Rajagopalan, S.R.: Prioritizing intrusion analysis using Dempster–Shafer theory. In: Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pp. 59–70. ACM (2011)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.