Multinomial processing tree (MPT) models (for a review, see Erdfelder et al., 2009) are stochastic models that are based on assumptions about the interplay of various latent cognitive processes that presumably underlie an observed behavioral response in a particular experimental paradigm (e.g., an “old” response in a recognition test). The assumed interaction of the proposed underlying processes (e.g., memory processes and guessing processes) can be illustrated as a processing tree. A processing tree links a particular test stimulus with all possible response options to this stimulus by specifying different sequences of cognitive processes that may lead to the behavioral responses. The probabilities of relevant cognitive processes taking place or not are formalized as model parameters that are bound to individual branches of the processing tree and reflect transition probabilities between cognitive states.
The feedback memory model described in the main text—an adapted version of the three sources model of Riefer et al. (1994)—distinguishes four different stimulus types in the memory test: statements with “true” feedback, statements with “false” feedback, statements with “uncertain” feedback, and “new” statements. For each of the four stimulus types, a separate processing tree is proposed, each making specific assumptions about the possible cognitive processes that may lead to the four response options “true,” “false,” “uncertain,” and “new.” Crossing the four stimulus types with the four response options yields 16 possible outcome events, which form the empirical basis of the MPT model. These events are determined by memory and guessing processes represented by the following model parameters: D (probability of statement recognition or lure detection, respectively), d (probability of feedback recognition), b (probability of guessing “old” in case of recognition uncertainty), ai (probability of guessing feedback i for recognized statements), and gi (probability of guessing feedback i for unrecognized statements).
In the following, we will explain the MPT model illustrated in Fig. 1 in the main text using the example of a statement with “true” feedback. When presented with such a statement in the memory test, a participant either recognizes this statement as old with probability Dtrue or does not recognize the statement with the complementary probability 1 − Dtrue. If the statement is recognized, the feedback “true” may also be recognized with probability dtrue or it is not recognized with the complementary probability 1 − dtrue. If both the statement and the feedback are recognized, the participant responds “true” at test accordingly. If the statement is recognized but the feedback is not, the participant must guess the truth-value feedback presented along with the statement. Specifically, the participant guesses with probability atrue that the statement was presented as “true,” with probability afalse that the statement was presented as “false,” and with probability auncert. that the statement was presented as “uncertain.” If the statement is not recognized, the participant is in a state of recognition uncertainty. In this case, the participant either guesses with probability b that the statement is old or with probability 1 − b that the statement is new. In the latter case, the participant responds “new.” However, in case of an “old” guess, the feedback information has to be guessed as well. Hence, the participant guesses “true” with probability gtrue, “false” with probability gfalse, and “uncertain” with probability guncert.
The same logic also applies to statements with “false” feedback and those with “uncertain” feedback. For the “new” statements (i.e., the lures in the memory test), the Dnew parameter does not reflect statement recognition, but lure detection—that is, the probability of detecting that a new statement was not presented in the study phase. In contrast, 1 − Dnew reflects the probability that a lure remains undetected. In this case, participants are in the state of uncertainty, which again leads to the same guessing processes as recognition uncertainty for old target statements. By implication, because new statements were not presented in the study phase—and thus did not receive truth-value feedback—the processing tree for the new statements does not include a feedback memory parameter (d).
The MPT model illustrated in Fig. 1 can be translated into model equations that specify the probabilities of the 16 outcome events in the memory test as a function of the model’s parameters. For instance, p(“true”/false) denotes the probability of responding “true” to a statement presented with “false” feedback. This leads to the following 16 model equations (note that parameter indices are abbreviated below; t = true, f = false, u = uncert., n = new):
p(“true”/true) = Dt × dt + Dt × (1 − dt) × at + (1 − Dt) × b × gt
p(“false”/true) = Dt × (1 − dt) × af + (1 − Dt) × b × gf
p(“uncert.”/true) = Dt × (1 − dt) × au + (1 − Dt) × b × gu
p(“new”/true) = (1 − Dt) × (1 − b)
p(“true”/false) = Df × (1 − df) × at + (1 − Df) × b × gt
p(“false”/false) = Df × df + Df × (1 − df) × af + (1 − Df) × b × gf
p(“uncert.”/false) = Df × (1 − df) × au + (1 − Df) × b × gu
p(“new”/false) = (1 − Df) × (1 − b)
p(“true”/uncert.) = Du × (1 − du) × at + (1 − Du) × b × gt
p(“false”/uncert.) = Du × (1 − du) × af + (1 − Du) × b × gf
p(“uncert.”/uncert.) = Du × du + Du × (1 − du) × au + (1 − Du) × b × gu
p(“new”/uncert.) = (1 − Du) × (1 − b)
p(“true”/new) = (1 − Dn) × b × gt
p(“false”/new) = (1 − Dn) × b × gf
p(“uncert.”/new) = (1 − Dn) × b × gu
p(“new”/new) = Dn + (1 − Dn) × (1 − b)
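The 16 model equations above can be expressed compactly in code. The following sketch (the function name and the illustrative parameter values are our own, not taken from the original model software) computes the response probabilities implied by the equations and checks that, within each stimulus type, the four response probabilities sum to 1:

```python
FEEDBACK = ("true", "false", "uncert.")   # feedback types for old statements
RESPONSES = ("true", "false", "uncert.", "new")

def mpt_probs(D, d, b, a, g):
    """Response probabilities p(response | stimulus type) implied by the
    16 model equations.

    D : dict of statement-recognition (or lure-detection) probabilities,
        keyed by "true", "false", "uncert.", "new"
    d : dict of feedback-recognition probabilities, keyed by feedback type
    b : probability of guessing "old" under recognition uncertainty
    a : dict of feedback-guessing probabilities given statement recognition
        (must sum to 1 over the three feedback types)
    g : dict of feedback-guessing probabilities given an "old" guess
        (must sum to 1 over the three feedback types)
    """
    p = {}
    for i in FEEDBACK:                            # old statements with feedback i
        for j in FEEDBACK:
            p[(j, i)] = (D[i] * d[i] * (j == i)       # statement and feedback recognized
                         + D[i] * (1 - d[i]) * a[j]   # statement recognized, feedback guessed
                         + (1 - D[i]) * b * g[j])     # "old" and feedback guessed
        p[("new", i)] = (1 - D[i]) * (1 - b)
    for j in FEEDBACK:                            # lures: no d-parameter
        p[(j, "new")] = (1 - D["new"]) * b * g[j]
    p[("new", "new")] = D["new"] + (1 - D["new"]) * (1 - b)
    return p

# Hypothetical parameter values, for illustration only
p = mpt_probs(
    D={"true": 0.6, "false": 0.6, "uncert.": 0.5, "new": 0.5},
    d={"true": 0.4, "false": 0.4, "uncert.": 0.3},
    b=0.5,
    a={"true": 0.4, "false": 0.3, "uncert.": 0.3},
    g={"true": 0.4, "false": 0.3, "uncert.": 0.3},
)
# For every stimulus type, the four response probabilities sum to 1
for stim in ("true", "false", "uncert.", "new"):
    assert abs(sum(p[(r, stim)] for r in RESPONSES) - 1) < 1e-12
```

The sum-to-1 check follows directly from the tree structure: within each tree, the branch probabilities exhaust all possible processing paths.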
With these model equations, it is possible to estimate the model’s parameters for a given data set of response frequencies by means of an iterative maximum likelihood estimation algorithm (EM algorithm; Hu & Batchelder, 1994), provided that the model is identifiable. Bayen et al. (1996) noted that two-high-threshold MPT models (such as the MPT model described above) are not identifiable without restricting the value of the Dnew-parameter. For this reason, they suggested equating Dnew with at least one of the other D-parameters. This parameter restriction is in line with the empirically well-established mirror effect—the symmetrical increase (or decrease) of hits and correct rejections (Glanzer et al., 1993). Indeed, a validation study by Bayen et al. (1996) provided convincing evidence for the superiority of the two-high-threshold model with restricted Dnew-parameter over one-high-threshold and low-threshold models. In our two experiments, we restricted the value of the Dnew-parameter to the value of the Duncert.-parameter within each group (see Bell et al., 2010, for an equivalent restriction). Importantly, this restriction did not result in model misfit.
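The objective that the EM algorithm maximizes can be illustrated without reproducing the algorithm itself. Given the cell probabilities implied by the model equations, maximum likelihood estimation searches for the parameter values that minimize the negative multinomial log-likelihood of the observed frequencies. The sketch below (frequencies and probabilities are hypothetical, not data from the experiments) shows this objective; note that predicted probabilities closer to the observed proportions yield a smaller value:

```python
import math

def neg_loglik(freq, p):
    """Negative multinomial log-likelihood (up to an additive constant)
    of observed response frequencies freq under predicted cell
    probabilities p. ML estimation seeks the parameter values whose
    implied probabilities minimize this quantity."""
    return -sum(n * math.log(p[cell]) for cell, n in freq.items() if n > 0)

# Hypothetical response frequencies for statements with "true" feedback
freq = {"true": 50, "false": 12, "uncert.": 13, "new": 25}
# Predicted probabilities close to vs. far from the observed proportions
p_close = {"true": 0.50, "false": 0.12, "uncert.": 0.13, "new": 0.25}
p_far   = {"true": 0.25, "false": 0.25, "uncert.": 0.25, "new": 0.25}
assert neg_loglik(freq, p_close) < neg_loglik(freq, p_far)
```

In practice, p would be computed from the model equations as a function of the D-, d-, b-, a-, and g-parameters (with Dnew equated to Duncert., as described above), and the EM algorithm would iterate toward the minimizing parameter values.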
Model fit of MPT models can be assessed by means of the goodness-of-fit statistic G2, which is asymptotically χ2-distributed if the model holds (Read & Cressie, 1988). The G2-test compares the model’s predicted response frequencies with the observed response frequencies. The degrees of freedom for this test correspond to the number of independent outcome events minus the number of freely estimated parameters. Significant discrepancies between the predicted and observed frequencies indicate model misfit. However, model fit is not the only criterion for the evaluation of a model’s validity. A valid MPT model is characterized by the fact that each of its parameters responds to targeted experimental manipulations in a predictable manner. This criterion is met by source-monitoring MPT models—such as the model described above and used in our experiments—as has been successfully demonstrated in several validation studies (Bayen et al., 1996; Riefer et al., 1994). Most importantly for our purposes, the d-parameter of such models is a much more accurate measure of source memory (or feedback memory, respectively) than proxy measures such as SIM and CSIM, which may be confounded by item memory and guessing processes (for a review, see Bröder & Meiser, 2007).
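The G2 statistic described above can be computed directly from observed frequencies and model-implied probabilities. The following sketch uses hypothetical numbers for a single stimulus type (in practice, the statistic is summed over all 16 cells):

```python
import math

def g_squared(observed, predicted_probs):
    """Goodness-of-fit statistic G2 = 2 * sum(obs * ln(obs / exp)),
    where exp = N * predicted probability and N is the total frequency;
    cells with zero observed frequency contribute 0."""
    n_total = sum(observed)
    return 2 * sum(obs * math.log(obs / (n_total * p))
                   for obs, p in zip(observed, predicted_probs) if obs > 0)

# Hypothetical frequencies and model-implied probabilities for one
# stimulus type, in the order "true", "false", "uncert.", "new"
observed  = [48, 12, 15, 25]
predicted = [0.46, 0.13, 0.16, 0.25]
g2 = g_squared(observed, predicted)
# g2 is then referred to the chi-square distribution with df equal to
# the number of independent outcome events minus the number of freely
# estimated parameters; a significant result indicates model misfit
```

When the predicted probabilities exactly match the observed proportions, G2 is 0; any discrepancy yields a positive value.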