Abstract
We model consumer journeys for user-created programs published in an online programming platform (OPP) and uncover factors that predict their occurrence. We build our model on a theoretical framework where consumer journeys involve three latent stages (Learn, Feel, Do), in which users gather information about, express fondness toward, and try the published items, respectively. Using a dataset from an OPP where users publish multimedia items and follow other users, we find that there is no one dominant consumer journey; instead, the sequences of stages in a journey (e.g., Learn → Feel → Do) vary across published items. Furthermore, we find that the social capital (i.e., social network) of a publisher influences the occurrence of spillover effects between latent stages (the phenomenon that one stage in a period triggers another stage in the next period) for the items posted by the publisher. We also find that a publisher’s social capital has only a transient impact on the consumer journeys for the publisher’s projects, underlining the importance of consistently making new network connections in order to promote the growth of user activities surrounding the publisher’s projects. We apply our findings to publishers’ networking investment decisions and show that these investments would be severely suboptimal if journey heterogeneity were not considered.
Notes
A platform that allows n user activities has n! different linear journeys. For instance, with 5 activities there are 5! = 120 journeys. Platforms with just 6, 7, and 8 user activities would theoretically have 720; 5,040; and 40,320 journeys, respectively. As a result, one would need methods to reduce the dimensions of the journey “space.”
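The factorial growth described above is easy to verify; a minimal Python check (ours, not part of the paper):

```python
from math import factorial

# Number of distinct linear journey orderings for n user activities.
for n in (5, 6, 7, 8):
    print(n, factorial(n))  # 5 -> 120, 6 -> 720, 7 -> 5040, 8 -> 40320
```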
The comment activity might be assigned to the Feel stage given there is a research stream on management responses to consumer reviews (e.g., Proserpio and Zervas 2017). However, we chose to assign the comment activity to the Learn stage in an attempt to capture a proxy for customer learning rather than customer engagement. This assignment is supported by the fact that most comments on our platform were unidirectional from the users who encountered the projects to the projects’ publishers and that the majority of comments have neutral sentiments. Even in the rare cases where the publishers replied to the user comments, the replies were mostly about the projects (i.e., discussing the projects) rather than about building relationships. As such, it seems reasonable to assign the comment activity to the Learn stage. We thank the anonymous reviewer who brought up this point.
As Figure 1 shows, there are high correlations among a user’s network properties over time. The resulting multicollinearity prevents us from including all the available network properties in the model. We use degree for its intuitive appeal (which helps us interpret the estimation results) and based on prior research showing the effect of degree on the product adoption curve (e.g., Dover et al. 2012).
For the complete estimation results, please contact the corresponding author.
References
Ansari, A., Stahl, F., Heitmann, M., & Bremer, L. (2018). Building a social network for success. Journal of Marketing Research, 55(3), 321–338.
Bornstein, R. F. (1989). Exposure and affect: Overview and meta-analysis of research, 1968-1987. Psychological Bulletin, 106(2), 265–289.
Bruce, N. I., Peters, K., & Naik, P. A. (2012). Discovering how advertising grows sales and builds brands. Journal of Marketing Research, 49(6), 793–806.
Burt, R. S. (1997). The contingent value of social capital. Administrative Science Quarterly, 42(2), 339–365.
Carter, C. K., & Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika, 81(3), 541–553.
Dover, Y., Goldenberg, J., & Shapira, D. (2012). Network traces on penetration: Uncovering degree distribution from adoption data. Marketing Science, 31(4), 689–712.
Fruhwirth-Schnatter, S. (1994). Data augmentation and dynamic linear models. Journal of Time Series Analysis, 15(2), 183–202.
Goldenberg, J., Han, S., Lehmann, D. R., & Hong, J. W. (2009). The role of hubs in the adoption process. Journal of Marketing, 73(2), 1–13.
Gopinath, S., Chintagunta, P. K., & Venkataraman, S. (2013). Blogs, advertising, and local-market movie box office performance. Management Science, 59(12), 2635–2654.
Hu, Y., Du, R. Y., & Damangir, S. (2014). Decomposing the impact of advertising: Augmenting sales with online search data. Journal of Marketing Research, 51(3), 300–319.
Iyengar, R., Van den Bulte, C., & Lee, J. Y. (2015). Social contagion in new product trial and repeat. Marketing Science, 34(4), 408–429.
Johnson, B. T., Maio, G. R., & Smith-McLallen, A. (2005). Communication and attitude change: Causes, processes, and effects. In The Handbook of Attitudes, 617–670. London: Psychology Press.
Katona, Z., Zubcsek, P. P., & Sarvary, M. (2011). Network effects and personal influences: The diffusion of an online social network. Journal of Marketing Research, 48(3), 425–443.
Keller, E., & Barry, J. (2003). The Influentials: One American in ten tells the other nine how to vote, where to eat, and what to buy. New York: The Free Press.
Kim, H., & Bruce, N. I. (2018). Should sequels differ from original movies in pre-launch advertising schedule? Lessons from consumers’ online search activity. International Journal of Research in Marketing, 35(1), 116–143.
Lavidge, R. J., & Steiner, G. A. (1961). A model for predictive measurements of advertising effectiveness. Journal of Marketing, 25(6), 59–62.
Mackenzie, S. B., Lutz, R. J., & Belch, G. E. (1986). The role of attitude toward the ad as a mediator of advertising effectiveness: A test of competing explanations. Journal of Marketing Research, 23(2), 130–143.
Miniard, P. W., Bhatla, S., Lord, K. R., Dickson, P. R., & Unnava, H. R. (1991). Picture-based persuasion processes and the moderating role of involvement. Journal of Consumer Research, 18(1), 92–107.
Onishi, H., & Manchanda, P. (2012). Marketing activity, blogging and sales. International Journal of Research in Marketing, 29(3), 221–234.
Peters, K., Chen, Y., Kaplan, A. M., Ognibeni, B., & Pauwels, K. (2013). Social media metrics—A framework and guidelines for managing social media. Journal of Interactive Marketing, 27(4), 281–298.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer-Verlag.
Petty, R. E., Schumann, D. W., Richman, S. A., & Strathman, A. J. (1993). Positive mood and persuasion: Different roles for affect under high- and low-elaboration conditions. Journal of Personality and Social Psychology, 64(1), 5–20.
Proserpio, D., & Zervas, G. (2017). Online reputation management: Estimating the impact of management responses on consumer reviews. Marketing Science, 36(5), 645–665.
Resnick, M., Maloney, J., Monroy-Hernandez, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67.
Risselada, H., Verhoef, P. C., & Bijmolt, T. H. A. (2014). Dynamic effects of social influence and direct marketing on the adoption of high-technology products. Journal of Marketing, 78(2), 52–68.
Sonnier, G. P., McAlister, L., & Rutz, O. J. (2011). A dynamic model of the effect of online communications on firm sales. Marketing Science, 30(4), 702–716.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society (Series B), 64(4), 583–639.
Srinivasan, S., Rutz, O. J., & Pauwels, K. (2016). Paths to and off purchase: Quantifying the impact of traditional marketing and online consumer activity. Journal of the Academy of Marketing Science, 44(4), 440–453.
Strong, E. K. (1925). The psychology of selling and advertising. New York: McGraw-Hill.
Vakratsas, D., & Ambler, T. (1999). How advertising works: What do we really know? Journal of Marketing, 63(1), 26–43.
Van den Bulte, C., & Wuyts, S. (2007). New product diffusion with influentials and imitators. Marketing Science, 26(3), 400–421.
Vaughn, R. (1980). How advertising works: A planning model. Journal of Advertising Research, 20(5), 27–33.
Vaughn, R. (1986). How advertising works: A planning model revisited. Journal of Advertising Research, 26(1), 57–66.
Weimann, G. (1991). The Influentials: Back to the concept of opinion leaders? Public Opinion Quarterly, 55(2), 267–279.
Xiong, G., & Bharadwaj, S. (2014). Prerelease buzz evolution patterns and new product performance. Marketing Science, 33(3), 1–22.
Yoganarasimhan, H. (2012). Impact of social network structure on content propagation: A study using YouTube data. Quantitative Marketing and Economics, 10(1), 111–150.
Shrihari Sridhar served as Area Editor for this article.
Appendix
MCMC Algorithm
The model is as follows:
where \( {\psi}_{it} \) is the dynamic latent instrument. We assume the following distributions for the error terms:
\( {v}_{ijtk}\sim N\left(0,{V}_{ijk}\right) \), \( {\eta}_{ijk}^a\sim N\left(0,\mathit{\operatorname{var}}\left({\eta}_{ijk}^a\right)\right) \), \( {\eta}_{ijk}^b\sim N\left(0,\mathit{\operatorname{var}}\left({\eta}_{ijk}^b\right)\right) \), \( {\eta}_{ijk}^c\sim N\left(0,\mathit{\operatorname{var}}\left({\eta}_{ijk}^c\right)\right) \), \( {\varepsilon}_i^{learn}\sim N\left(0,\mathit{\operatorname{var}}\left({\varepsilon}_i^{learn}\right)\right) \), \( {\varepsilon}_i^{feel}\sim N\left(0,\mathit{\operatorname{var}}\left({\varepsilon}_i^{feel}\right)\right) \), \( {\varepsilon}_i^{do}\sim N\left(0,\mathit{\operatorname{var}}\left({\varepsilon}_i^{do}\right)\right) \), \( {\epsilon}_{ilm}\sim N\left(0,\mathit{\operatorname{var}}\left({\epsilon}_{ilm}\right)\right) \), \( {w}_{it}^{\psi}\sim N\left(0,\mathit{\operatorname{var}}\left({w}_{it}^{\psi}\right)\right) \), \( {\xi}_i\sim N\left(0,\mathit{\operatorname{var}}\left({\xi}_i\right)\right) \), and \( {\overset{\sim }{\mathbf{w}}}_{it}\equiv {\left[{{\mathbf{w}}_{it}^{\boldsymbol{\uptheta}}}^{\prime}\kern0.5em {w}_{it}^{degree}\right]}^{\prime}\sim MVN\left(0,{\boldsymbol{\Omega}}_i\right) \), where
The MCMC algorithm consists of two parts. In Part 1, we draw parameters for individual publishers and projects using Equations (1-1)–(1-3), (2-1), (3-1), (4-1), and (4-2). In Part 2, we shrink the individual-level parameters using Equations (2-2)–(2-4), (3-2)–(3-5), and (4-3). We iterate between the two parts until we have collected a representative sample from the posterior distribution. In this appendix, we illustrate the algorithm assuming journey J2: Feel → Learn → Do. Adaptation to the other journey orderings is straightforward.
Part 1: Individual User-Level Parameters
Sample \( {\boldsymbol{\uptheta}}_{it} \)
We represent the model in the state-space framework. For user i, Equation (A.1), which restates Equation (5), is the observation equation. Equation (A.2) is the state equation.
Let \( {\mathbf{V}}_i \) be the diagonal matrix whose diagonal terms consist of \( {V}_{ijk} \) and let \( {\mathbf{W}}_i \) be the covariance matrix of the composite error vector \( {\mathbf{w}}_{it}^{\boldsymbol{\uptheta}} \) conditional on \( {w}_{it}^{degree} \)—i.e., \( {\mathbf{W}}_i={\mathbf{W}}_i^{\boldsymbol{\uptheta}}-{\boldsymbol{\Omega}}_i^{\theta n}{\left({W}_i^{nn}\right)}^{-1}{{\boldsymbol{\Omega}}_i^{\theta n}}^{\prime } \). We apply the forward-filtering/backward-sampling (FF/BS) algorithm (West and Harrison 1997) to draw \( {\boldsymbol{\uptheta}}_{it} \). Let \( {D}_{it} \) denote the information set at t for user i.
- Forward Filtering.
(a) Posterior at t − 1: \( {\boldsymbol{\uptheta}}_{i,t-1}\mid {D}_{i,t-1}\sim N\left({\mathbf{m}}_{i,t-1},{\mathbf{C}}_{i,t-1}\right) \).
(b) Prior at t: \( {\boldsymbol{\uptheta}}_{it}\mid {D}_{i,t-1}\sim N\left({\mathbf{a}}_{it},{\mathbf{R}}_{it}\right) \), where \( {\mathbf{a}}_{it}={\mathbf{G}}_i{\mathbf{m}}_{i,t-1}+{\mathbf{u}}_{it}+{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}{\left({W}_i^{nn}\right)}^{-1}\left[ degre{e}_{it}-{\psi}_{it}\right] \) and \( {\mathbf{R}}_{it}={\mathbf{G}}_i{\mathbf{C}}_{i,t-1}{\mathbf{G}}_i^{\prime }+{\mathbf{W}}_i^{\boldsymbol{\uptheta}}-{\boldsymbol{\Omega}}_i^{\theta n}{\left({W}_i^{nn}\right)}^{-1}{{\boldsymbol{\Omega}}_i^{\theta n}}^{\prime } \).
(c) One-step-ahead forecast of \( {\mathbf{s}}_{it} \) at t: \( {\mathbf{s}}_{it}\mid {D}_{i,t-1}\sim N\left({\mathbf{f}}_{it},{\mathbf{B}}_{it}\right) \), where \( {\mathbf{f}}_{it}={\mathbf{F}}_{it}^{\prime }{\mathbf{a}}_{it} \) and \( {\mathbf{B}}_{it}={\mathbf{F}}_{it}^{\prime }{\mathbf{R}}_{it}{\mathbf{F}}_{it}+{\mathbf{V}}_i \).
(d) Posterior at t: \( {\boldsymbol{\uptheta}}_{it}\mid {D}_{it}\sim N\left({\mathbf{m}}_{it},{\mathbf{C}}_{it}\right) \), where \( {\mathbf{m}}_{it}={\mathbf{a}}_{it}+{\mathbf{R}}_{it}{\mathbf{F}}_{it}{\mathbf{B}}_{it}^{-1}\left({\mathbf{s}}_{it}-{\mathbf{f}}_{it}\right) \) and \( {\mathbf{C}}_{it}={\mathbf{R}}_{it}-{\mathbf{R}}_{it}{\mathbf{F}}_{it}{\mathbf{B}}_{it}^{-1}{\mathbf{F}}_{it}^{\prime }{\mathbf{R}}_{it} \).
- Backward Sampling.
at t = T: \( {\boldsymbol{\uptheta}}_{iT}\mid {D}_{iT}\sim N\left({\mathbf{m}}_{iT},{\mathbf{C}}_{iT}\right) \).
at t = T − 1, …, 0: \( {\boldsymbol{\uptheta}}_{it}\mid {\boldsymbol{\uptheta}}_{i,t+1},{D}_{it}\sim N\left({\mathbf{g}}_{it},{\mathbf{K}}_{it}\right) \), where \( {\mathbf{g}}_{it}={\mathbf{m}}_{it}+{\mathbf{C}}_{it}{\mathbf{G}}_i^{\prime }{\mathbf{R}}_{i,t+1}^{-1}\left({\boldsymbol{\uptheta}}_{i,t+1}-{\mathbf{a}}_{i,t+1}\right) \) and \( {\mathbf{K}}_{it}={\mathbf{C}}_{it}-{\mathbf{C}}_{it}{\mathbf{G}}_i^{\prime }{\mathbf{R}}_{i,t+1}^{-1}{\mathbf{G}}_i{\mathbf{C}}_{it} \).
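For readers who want to experiment, the FF/BS recursions above can be sketched in Python for a generic linear Gaussian state-space model with a scalar observation. This is an illustrative simplification of the appendix equations (it omits the exogenous input \( {\mathbf{u}}_{it} \) and the degree-conditioning terms involving \( {\boldsymbol{\Omega}}_i^{\theta n} \)); all names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def ffbs(y, F, G, V, W, m0, C0):
    """Forward-filtering/backward-sampling (FF/BS) for
        y_t     = F' theta_t + v_t,        v_t ~ N(0, V)
        theta_t = G theta_{t-1} + w_t,     w_t ~ N(0, W)
    with scalar observations y_t. Returns one joint posterior
    draw of the latent state path theta_{1..T}."""
    T, p = len(y), len(m0)
    ms, Cs, as_, Rs = [], [], [], []
    m, C = np.asarray(m0, float), np.asarray(C0, float)
    for t in range(T):                       # forward filtering
        a = G @ m                            # prior mean a_t
        R = G @ C @ G.T + W                  # prior covariance R_t
        f = F @ a                            # one-step forecast f_t
        B = F @ R @ F + V                    # forecast variance B_t
        A = R @ F / B                        # Kalman gain
        m = a + A * (y[t] - f)               # posterior mean m_t
        C = R - np.outer(A, A) * B           # posterior covariance C_t
        C = (C + C.T) / 2                    # enforce symmetry numerically
        ms.append(m); Cs.append(C); as_.append(a); Rs.append(R)
    theta = np.empty((T, p))                 # backward sampling
    theta[-1] = rng.multivariate_normal(ms[-1], Cs[-1])
    for t in range(T - 2, -1, -1):
        J = Cs[t] @ G.T @ np.linalg.inv(Rs[t + 1])
        g = ms[t] + J @ (theta[t + 1] - as_[t + 1])   # smoother mean g_t
        K = Cs[t] - J @ G @ Cs[t]                     # smoother covariance K_t
        theta[t] = rng.multivariate_normal(g, (K + K.T) / 2)
    return theta
```

Inside a Gibbs sampler, a call like this per user would replace the FF/BS step, with the remaining steps conditioning on the drawn path.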
Step 1-1) Sample \( {\gamma}_{i11} \) and \( {\beta}_i^{learn} \)
Let \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i={\left[{\gamma}_{i11}\kern0.5em {\beta}_i^{learn}\ \right]}^{\prime } \), \( {\overset{\sim }{\mathbf{x}}}_{it}=\left[{\theta}_{i,t-1}^{learn}\kern0.75em degre{e}_{it}\ \right] \), and let \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[p\right] \) be the pth element of the column vector \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}=\left[{W}_i^{ln}\kern0.5em {W}_i^{fn}\kern0.5em {W}_i^{dn}\right] \), \( {\overset{\sim }{\mathbf{y}}}_{it}={\theta}_{it}^{learn}-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[1\right]{\left({W}_i^{nn}\right)}^{-1}\left( degre{e}_{it}-{\psi}_{it}\right) \), and \( {\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}=\boldsymbol{\Omega} \left[1,1\right]-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[1\right]{\left({W}_i^{nn}\right)}^{-1}{{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}}^{\prime}\left[1\right] \). Then, \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i\sim MVN\left({\overset{\sim }{\mathbf{b}}}_i,\kern0.5em {\overset{\sim }{\mathbf{S}}}_i\right) \), where \( {\overset{\sim }{\mathbf{S}}}_i={\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{X}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}\right]}^{-1} \) and \( {\overset{\sim }{\mathbf{b}}}_i={\overset{\sim }{\mathbf{S}}}_i\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{y}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}{\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i\right] \). \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1} \) is the prior precision matrix and \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i \) is the prior mean.
That is, \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i={\left[{\overline{\gamma}}_{11}\kern0.75em \overline{\beta^{learn}}\kern0.5em \right]}^{\prime } \) and \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i} \) is a diagonal matrix whose diagonal elements are var(ϵi11) and \( \mathit{\operatorname{var}}\left({\varepsilon}_i^{learn}\right) \).
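Steps 1-1 through 1-3 are the same conjugate normal regression draw applied with different regressors and responses. The posterior covariance \( {\overset{\sim }{\mathbf{S}}}_i \) and mean \( {\overset{\sim }{\mathbf{b}}}_i \) can be sketched as follows (our own simplified notation, with a scalar residual variance standing in for \( {\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}} \)):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_coefficients(X, y, resid_var, prior_mean, prior_cov):
    """One conjugate Gibbs draw of regression coefficients beta in
    y = X beta + e, e ~ N(0, resid_var), under the normal prior
    beta ~ N(prior_mean, prior_cov). Mirrors the S~ / b~ formulas
    shared by Steps 1-1 to 1-3 (illustrative sketch)."""
    prior_prec = np.linalg.inv(prior_cov)                    # prior precision
    S = np.linalg.inv(X.T @ X / resid_var + prior_prec)      # posterior covariance
    b = S @ (X.T @ y / resid_var + prior_prec @ prior_mean)  # posterior mean
    return rng.multivariate_normal(b, S)
```

With informative data the draw concentrates near the least-squares value, and with little data it shrinks toward the prior mean.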
Step 1-2) Sample \( {\gamma}_{i21} \), \( {\gamma}_{i22} \), and \( {\beta}_i^{feel} \)
Let \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i={\left[{\gamma}_{i21}\kern0.5em {\gamma}_{i22}\ {\beta}_i^{feel}\ \right]}^{\prime } \), \( {\overset{\sim }{\mathbf{x}}}_{it}=\left[{\theta}_{i,t-1}^{learn}\kern0.5em {\theta}_{i,t-1}^{feel}\kern0.5em degre{e}_{it}\ \right] \), and let \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[p\right] \) be the pth element of the column vector \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}=\left[{W}_i^{ln}\kern0.5em {W}_i^{fn}\kern0.5em {W}_i^{dn}\right] \), \( {\overset{\sim }{\mathbf{y}}}_{it}={\theta}_{it}^{feel}-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[2\right]{\left({W}_i^{nn}\right)}^{-1}\left( degre{e}_{it}-{\psi}_{it}\right) \), and \( {\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}=\boldsymbol{\Omega} \left[2,2\right]-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[2\right]{\left({W}_i^{nn}\right)}^{-1}{{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}}^{\prime}\left[2\right] \). Then, \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i\sim MVN\left({\overset{\sim }{\mathbf{b}}}_i,\kern0.5em {\overset{\sim }{\mathbf{S}}}_i\right) \), where \( {\overset{\sim }{\mathbf{S}}}_i={\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{X}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}\right]}^{-1} \) and \( {\overset{\sim }{\mathbf{b}}}_i={\overset{\sim }{\mathbf{S}}}_i\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{y}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}{\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i\right] \). \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1} \) is the prior precision matrix and \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i \) is the prior mean.
That is, \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i={\left[{\overline{\gamma}}_{21}\ {\overline{\gamma}}_{22}\kern0.5em \overline{\beta^{feel}}\kern0.5em \right]}^{\prime } \) and \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i} \) is a diagonal matrix whose diagonal elements are var(ϵi21), var(ϵi22),and \( \mathit{\operatorname{var}}\left({\varepsilon}_i^{feel}\right) \).
Step 1-3) Sample \( {\gamma}_{i32} \), \( {\gamma}_{i33} \), and \( {\beta}_i^{do} \)
Let \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i={\left[{\gamma}_{i32}\kern0.5em {\gamma}_{i33}\ {\beta}_i^{do}\ \right]}^{\prime } \), \( {\overset{\sim }{\mathbf{x}}}_{it}=\left[{\theta}_{i,t-1}^{feel}\kern0.5em {\theta}_{i,t-1}^{do}\kern0.5em degre{e}_{it}\ \right] \), and let \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[p\right] \) be the pth element of the column vector \( {\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}=\left[{W}_i^{ln}\kern0.5em {W}_i^{fn}\kern0.5em {W}_i^{dn}\right] \), \( {\overset{\sim }{\mathbf{y}}}_{it}={\theta}_{it}^{do}-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[3\right]{\left({W}_i^{nn}\right)}^{-1}\left( degre{e}_{it}-{\psi}_{it}\right) \), and \( {\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}=\boldsymbol{\Omega} \left[3,3\right]-{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}\left[3\right]{\left({W}_i^{nn}\right)}^{-1}{{\boldsymbol{\Omega}}_i^{\boldsymbol{\uptheta} n}}^{\prime}\left[3\right] \). Then, \( {\overset{\sim }{\boldsymbol{\upbeta}}}_i\sim MVN\left({\overset{\sim }{\mathbf{b}}}_i,\kern0.5em {\overset{\sim }{\mathbf{S}}}_i\right) \), where \( {\overset{\sim }{\mathbf{S}}}_i={\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{X}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}\right]}^{-1} \) and \( {\overset{\sim }{\mathbf{b}}}_i={\overset{\sim }{\mathbf{S}}}_i\left[{\left({\overset{\sim }{\mathbf{W}}}_i^{\boldsymbol{\uptheta}}\right)}^{-1}{{\overset{\sim }{\mathbf{X}}}_i}^{\prime }{\overset{\sim }{\mathbf{y}}}_i+{\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1}{\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i\right] \). \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i}^{-1} \) is the prior precision matrix and \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i \) is the prior mean.
That is, \( {\overline{\overset{\sim }{\boldsymbol{\upbeta}}}}_i={\left[{\overline{\gamma}}_{32}\ {\overline{\gamma}}_{33}\kern0.5em \overline{\beta^{do}}\kern0.5em \right]}^{\prime } \) and \( {\Sigma}_{{\overset{\sim }{\boldsymbol{\upbeta}}}_i} \) is a diagonal matrix whose diagonal elements are var(ϵi32), var(ϵi33),and \( \mathit{\operatorname{var}}\left({\varepsilon}_i^{do}\right) \).
Sample \( {\boldsymbol{\Omega}}_i \)
The relevant equations are
The posterior distribution of Ωi is given by \( {\boldsymbol{\Omega}}_i\sim IW\left({T}_i+{q}_i,\left({V}_i+{\sum}_{t=1}^{T_i}\left({\boldsymbol{\upphi}}_{it}\right){\left({\boldsymbol{\upphi}}_{it}\right)}^{\prime}\right)\right), \)
where \( {q}_i=6 \), \( {V}_i={10}^{-6}{\mathbf{I}}_4 \), and \( {\boldsymbol{\upphi}}_{it}=\left[\begin{array}{c}{\theta}_{it}^{learn}-{\beta}_i^{learn} degre{e}_{it}-{\gamma}_{i11}{\theta}_{i,t-1}^{learn}\\ {}{\theta}_{it}^{feel}-{\beta}_i^{feel} degre{e}_{it}-{\gamma}_{i21}{\theta}_{i,t-1}^{learn}-{\gamma}_{i22}{\theta}_{i,t-1}^{feel}\\ {}{\theta}_{it}^{do}-{\beta}_i^{do} degre{e}_{it}-{\gamma}_{i32}{\theta}_{i,t-1}^{feel}-{\gamma}_{i33}{\theta}_{i,t-1}^{do}\\ {} degre{e}_{it}-{\psi}_{it}\end{array}\right] \).
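The inverse-Wishart full conditional above can be sampled with numpy alone via the standard construction: if \( \mathbf{W}\sim Wishart\left( df,{\mathbf{S}}^{-1}\right) \) then \( {\mathbf{W}}^{-1}\sim IW\left( df,\mathbf{S}\right) \). A minimal sketch (our notation; valid for integer degrees of freedom at least the dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_omega(phi, prior_df=6, prior_scale=None):
    """One draw of Omega_i from IW(T_i + q_i, V_i + sum_t phi_it phi_it'),
    as in the appendix. phi is a (T_i, 4) array of residual vectors phi_it."""
    T, d = phi.shape
    if prior_scale is None:
        prior_scale = 1e-6 * np.eye(d)       # V_i = 10^-6 I_4
    scale = prior_scale + phi.T @ phi        # posterior scale matrix
    df = T + prior_df                        # posterior degrees of freedom
    # Wishart(df, scale^-1) draw as a sum of df Gaussian outer products
    Z = rng.multivariate_normal(np.zeros(d), np.linalg.inv(scale), size=df)
    W = Z.T @ Z
    return np.linalg.inv(W)                  # inverse-Wishart draw
```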
Sample \( {\psi}_{it} \)
We transform the model into its reduced form. Equation (A.6) is the observation equation, and Equation (A.7) is the state equation.
Let \( {\overset{\sim }{\mathbf{V}}}_i \) be the covariance matrix of the composite error \( {\overset{\sim }{\mathbf{v}}}_{it} \): \( {\overset{\sim }{\mathbf{V}}}_i={\mathbf{L}}_i{\boldsymbol{\Omega}}_i{\mathbf{L}}_i^{\prime } \), where
We apply the FF/BS algorithm (West and Harrison 1997) to draw \( {\psi}_{it} \).
Step 1-4) Sample \( {\upsilon}_i \) and \( \mathit{\operatorname{var}}\left({w}_{it}^{\psi}\right) \)
The relevant equation is \( {\psi}_{it}={\upsilon}_i{\psi}_{it-1}+{w}_{it}^{\psi } \). We apply a normal-inverse Gamma prior. The prior distribution of υi comes from Equation (4-3): \( {\upsilon}_i\sim N\left(\overline{\upsilon},\kern0.5em \mathit{\operatorname{var}}\left({\xi}_i\right)\right) \). We use a diffuse inverse-Gamma prior for \( \mathit{\operatorname{var}}\left({w}_{it}^{\psi}\right) \): \( \mathit{\operatorname{var}}\left({w}_{it}^{\psi}\right)\sim IG\left(1,0.001\right) \).
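The AR(1) draw in this step can be sketched as follows. This is our own illustrative simplification: for compactness the variance draw conditions on the least-squares value of \( {\upsilon}_i \), whereas a full Gibbs pass would condition on the current draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_ar1(psi, prior_mean, prior_var, a0=1.0, b0=0.001):
    """One pass of Step 1-4 for a single user: sample upsilon and var(w)
    in psi_t = upsilon * psi_{t-1} + w_t, with upsilon ~ N(prior_mean,
    prior_var) and var(w) ~ IG(a0, b0). Illustrative sketch."""
    x, y = psi[:-1], psi[1:]
    ups_hat = (x @ y) / (x @ x)              # least-squares value of upsilon
    resid = y - ups_hat * x
    a = a0 + len(y) / 2.0
    b = b0 + 0.5 * (resid @ resid)
    sigma2 = 1.0 / rng.gamma(a, 1.0 / b)     # inverse-Gamma draw of var(w)
    post_var = 1.0 / (x @ x / sigma2 + 1.0 / prior_var)
    post_mean = post_var * (x @ y / sigma2 + prior_mean / prior_var)
    upsilon = rng.normal(post_mean, np.sqrt(post_var))
    return upsilon, sigma2
```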
Step 1-5) Sample \( {a}_{ijk} \), \( {b}_{ijk} \), \( {c}_{ijk} \), and \( {V}_{ijk} \)
The relevant regression equation comes from (WA.1): \( {s}_{ijtk}={a}_{ijk}{\theta}_{it}^{stag{e}_k}+{b}_{ijk}{y}_{ijt-1,k}{\theta}_{it}^{stag{e}_k}+{c}_{ijk}{y}_{ijt-1,k}^2{\theta}_{it}^{stag{e}_k}+{v}_{ijtk} \), where \( stag{e}_k \) is the journey stage corresponding to activity k (e.g., \( stag{e}_{play}= \) Learn, \( stag{e}_{download}= \) Do). We use a normal-inverse Gamma prior. The prior distributions of \( {a}_{ijk} \), \( {b}_{ijk} \), and \( {c}_{ijk} \) come from Equations (2-2)–(2-4); for example, the prior distribution of \( {a}_{ijk} \) is \( {a}_{ijk}\sim N\left(\overline{a_k},\kern0.5em \mathit{\operatorname{var}}\left({\eta}_{ijk}^a\right)\right) \). We use a diffuse inverse-Gamma prior for \( {V}_{ijk} \): \( {V}_{ijk}\sim IG\left(1,0.0001\right) \).
Part 2: Shrinkage (The Population Parameters)
Step 2-1) Sample \( \overline{a_k} \) and \( {\Xi}_k^a\equiv \mathit{\operatorname{var}}\left({\eta}_{ijk}^a\right) \)
The relevant equation is Equation (2-2): \( {a}_{ijk}=\overline{a_k}+{\eta}_{ijk}^a \), where \( {a}_{ijk} \) is drawn in Step 1-5). We use a diffuse normal-inverse Gamma prior to sample \( \overline{a_k} \) and \( {\Xi}_k^a \).
Step 2-2) Sample \( \overline{b_k} \) and \( {\Xi}_k^b\equiv \mathit{\operatorname{var}}\left({\eta}_{ijk}^b\right) \)
The relevant equation is Equation (2-3): \( {b}_{ijk}=\overline{b_k}+{\eta}_{ijk}^b \), where \( {b}_{ijk} \) is drawn in Step 1-5). The sampling procedure is identical to that in Step 2-1).
Step 2-3) Sample \( \overline{c_k} \) and \( {\Xi}_k^c\equiv \mathit{\operatorname{var}}\left({\eta}_{ijk}^c\right) \)
The relevant equation is Equation (2-4): \( {c}_{ijk}=\overline{c_k}+{\eta}_{ijk}^c \), where \( {c}_{ijk} \) is drawn in Step 1-5). The sampling procedure is identical to that in Step 2-1).
Step 2-4) Sample \( \overline{\beta^{stage}} \) and \( {\varOmega}^{stage}\equiv \mathit{\operatorname{var}}\left({\varepsilon}_i^{stage}\right) \)
The relevant equations are Equations (3-2)–(3-4). For example, the relevant equation for the Learn stage is Equation (3-2): \( {\beta}_i^{learn}=\overline{\beta^{learn}}+{\varepsilon}_i^{learn} \). We use a diffuse normal-inverse Gamma prior to sample \( \overline{\beta^{stage}} \) and \( {\varOmega}^{stage} \).
Step 2-5) Sample \( {\overline{\gamma}}_{lm} \) and \( {\Sigma}_{lm}\equiv \mathit{\operatorname{var}}\left({\epsilon}_{ilm}\right) \)
The relevant equation is Equation (3-5): \( {\gamma}_{ilm}={\overline{\gamma}}_{lm}+{\epsilon}_{ilm} \). We use a diffuse normal-inverse Gamma prior to sample \( {\overline{\gamma}}_{lm} \) and \( {\Sigma}_{lm} \).
Step 2-6) Sample \( \overline{\upsilon} \) and \( \mathit{\operatorname{var}}\left({\xi}_i\right) \)
The relevant equation is Equation (4-3): \( {\upsilon}_i=\overline{\upsilon}+{\xi}_i \). We use a diffuse normal-inverse Gamma prior to sample \( \overline{\upsilon} \) and \( \mathit{\operatorname{var}}\left({\xi}_i\right) \).
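All Part-2 steps share the same shrinkage form: an individual-level parameter equals a population mean plus a mean-zero error, sampled under a diffuse normal-inverse Gamma prior. A minimal sketch of one such pass (our notation; the hyperparameters are illustrative assumptions, and for compactness the variance draw conditions on the sample mean rather than the current mean draw):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_shrinkage(draws, m0=0.0, v0=1e6, a0=1.0, b0=0.001):
    """Given individual-level draws (e.g., the upsilon_i from Step 1-4),
    sample the population mean and variance under a diffuse
    normal-inverse-Gamma prior. Illustrative sketch of a Part-2 step."""
    n = len(draws)
    # draw the population variance from its inverse-Gamma conditional
    a = a0 + n / 2.0
    b = b0 + 0.5 * np.sum((draws - draws.mean()) ** 2)
    var = 1.0 / rng.gamma(a, 1.0 / b)
    # draw the population mean given the variance
    post_var = 1.0 / (n / var + 1.0 / v0)
    post_mean = post_var * (draws.sum() / var + m0 / v0)
    mean = rng.normal(post_mean, np.sqrt(post_var))
    return mean, var
```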
Kim, H., Jiang, J. & Bruce, N.I. Discovering heterogeneous consumer journeys in online platforms: implications for networking investment. J. of the Acad. Mark. Sci. 49, 374–396 (2021). https://doi.org/10.1007/s11747-020-00741-3