
Contributions to Theory, Method, and Practice

  • Arch Woodside
  • Rouxelle de Villiers
  • Roger Marshall

Abstract

The core principle on which this study is based is that what often appears as “common sense” or “known truths”, and what sometimes appears in the literature as truth without evidence and without formal testing of its validity, needs to be formally and scientifically studied. An example of such truths can be found in the book Redirect by Timothy Wilson (2011), which challenges the “known truth” that victims of trauma or abuse will benefit from immediate psychological counselling or critical incident stress debriefing (CISD). Wilson provides evidence that offering grief counselling immediately after a tragedy or traumatic incident is not a helpful strategy. The recommended and “better” strategy is to allow victims/survivors to deal with the trauma through story-telling and journal writing a few weeks after the critical event.

Keywords

Causal condition · Service recovery · Analytical hierarchical process · Competency training · Dominant logic

8.1 Core Principles

The core principle on which this study is based is that what often appears as “common sense” or “known truths”, and what sometimes appears in the literature as truth without evidence and without formal testing of its validity, needs to be formally and scientifically studied. An example of such truths can be found in the book Redirect by Timothy Wilson (2011), which challenges the “known truth” that victims of trauma or abuse will benefit from immediate psychological counselling or critical incident stress debriefing (CISD). Wilson provides evidence that offering grief counselling immediately after a tragedy or traumatic incident is not a helpful strategy. The recommended and “better” strategy is to allow victims/survivors to deal with the trauma through story-telling and journal writing a few weeks after the critical event.

Propositions on how to develop decision competence and what training methods affect management decision competency often appear in the literature as truth without evidence. The core purpose of this research is to formally test the validity of combinations of training methods for business schools to improve decision competency. Scientific assessment of methods is common practice in psychology and applied business, in both laboratory and field contexts, and it is somewhat surprising that business schools have not made much progress in testing useful configurations of teaching methods to improve decision competence. Using treatment and control groups as a method of finding tested and valid interventions is well established (Campbell & Stanley, 1963) in both laboratory and field contexts, but the majority of research papers still seem to use self-administered surveys and focus on net outcomes.

Armstrong and Green (2007) write of hostility towards such testing and cite the resistance of academics and the long battle to get a paper about a widely used decision aid, the Boston Consulting Group (BCG) matrix, published in recognized academic literature. They advise researchers to pursue scientific research, despite such resistance, in the pursuit of excellence. Another cornerstone of this study is the belief that “method shapes thinking and theory crafting” (Gigerenzer, 1991, cited in Woodside, 2013, p. 1). Woodside (2011a, b) warns against the limitations of the dominant methods of MRA and SEM and suggests that scholars use algorithms and fuzzy-set qualitative comparative analysis (fsQCA) as tools to develop theory in social science and management. Rong and Wilkinson (2011) expose many shortcomings in the use of cross-sectional self-report surveys to collect data on decision-making executives. They lament that most studies do not include attempts to create and test alternative causal sequences in managerial research. In response to the warnings and advice offered by these celebrated authors, this study takes up the challenge to look beyond “net effects” and the reliance on self-report surveys in order to find necessary and/or sufficient “key success factors” for decision competence.

This study relates to the above quests and to the view that method drives theory, which runs counter to the dominant logic that method naturally follows theory. The study examines configural recipes that combine treatment conditions and measured antecedents for their impact on high decision competence outcomes, rather than adopting the dominant logic of studying the “net effects” of the individual treatment conditions (variables) and comparing the relative size of the net impact via standardised betas. This study adopts the view that no single treatment or measured antecedent is sufficient or necessary for high decision competence.
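
In fsQCA terms, working with configural recipes rather than net effects begins by calibrating each raw measure into a fuzzy-set membership score between 0 and 1. The following Python sketch illustrates the direct method of calibration; the 0–10 competence scale and the three anchors (2 = fully out, 5 = crossover, 8 = fully in) are hypothetical illustrations, not values from this study.

```python
import math

def calibrate(value, full_out, crossover, full_in):
    """Direct calibration (Ragin-style): map a raw score onto a fuzzy-set
    membership in [0, 1] via a logistic transform anchored at three
    analyst-chosen thresholds (full non-membership, crossover, full
    membership)."""
    if not full_out < crossover < full_in:
        raise ValueError("anchors must be strictly increasing")
    # Scale so the full-membership anchor sits ~3 log-odds units above the
    # crossover (membership ~0.95) and the full-non-membership anchor ~3
    # units below it (membership ~0.05).
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (value - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical decision-competence scores on a 0-10 scale:
memberships = [round(calibrate(v, 2, 5, 8), 2) for v in (1, 5, 9)]
# A score of 5 sits exactly at the crossover anchor (membership 0.5).
```

The choice of anchors is substantive, not statistical: the analyst decides what counts as "fully competent" before any cross-case analysis runs.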

Following Armstrong and Brodie (1994), a true experimental design with treatment and control (placebo) conditions was implemented for proper, scientific testing of the real value of the propositions. This research takes a meaningful step towards examining combinations of tools for conscious thinking and contextual elements by studying different thinking tools as well as characteristics of the participants such as age, gender, education and management experience. The study of such combinations is a core recommendation of Simon (1992), in which the author presents a dual-blade (scissors) analogy that combines cognitive intelligence (here decision competence) with the context of the problem. A valuable advantage of the design and analysis methodology adopted in this study is that the researcher can study the potential configural causes of high competence outcomes and, simultaneously and with the same rigour, the configural causes of poor choices or incompetent decisions. This study therefore extends the work of Armstrong, Weick, Gigerenzer, and Simon and builds on the study by Spanier (2011) on sacrosanct pronouncements in managerial education as to successful competency training methods.

This study’s laboratory experiment examines four decisions in four separate marketing management realms and is, to the best of the researcher’s knowledge, the first to experiment on a large scale with tools for thinking well and for improving training in marketing decision-making by using true experiments and qualitative comparative analysis (QCA) to test propositions and useful recipes for competence and incompetence. Using fsQCA allows robust research despite a small number of cases (small-N). Experiments often cannot be designed with sufficient statistical power (at least 30 cases per cell) to test models and propositions. Configural analysis, in contrast, permits testing with few cases (5–10) per cell and is thus isomorphic with what happens in real life.
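
The fsQCA test of whether a configural recipe is sufficient for an outcome rests on two set-theoretic measures, consistency and coverage, which work even with the small-N cell sizes mentioned above. A minimal Python sketch follows; the condition names and the five cases' membership scores are invented for illustration and do not come from this study's data.

```python
def recipe(*conditions):
    """Fuzzy intersection (logical AND) of conditions: elementwise minimum."""
    return [min(vals) for vals in zip(*conditions)]

def consistency(x, y):
    """Degree to which recipe X is a subset of outcome Y: sum(min)/sum(x)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of outcome Y accounted for by recipe X: sum(min)/sum(y)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical fuzzy memberships for five participants:
group_work      = [0.9, 0.8, 0.2, 0.7, 0.1]
comp_training   = [0.8, 0.9, 0.6, 0.3, 0.2]
high_competence = [0.9, 0.8, 0.4, 0.2, 0.1]

x = recipe(group_work, comp_training)   # group_work AND comp_training
cons = consistency(x, high_competence)  # ~0.95: a near-sufficient recipe
cov = coverage(x, high_competence)      # ~0.88: recipe covers most of outcome
```

A consistency above a conventional cut-off (often 0.80) supports treating the recipe as sufficient; coverage then indicates how empirically important the recipe is.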

The study replicates four decision points in the separate domains, thus generating multiple decisions and contexts while keeping the measured antecedents related to the decision-makers (participants) constant. Four sacred pronouncements challenged by scholars in the literature and by this study are: (1) facts and evidence-based decisions versus peer opinions and overconfidence in one’s own competence; (2) the use of fast and frugal heuristics versus analytical hierarchical processes (AHP), such as the use of a weighted prioritisation matrix; (3) market share and competitor orientation versus profit maximisation; (4) media coverage versus cash-flow and return on investment (ROI). As stated earlier, none of these models is necessarily associated with competence or incompetence; both goals and context affect their effectiveness as decision aids. An example quoted earlier is the proposal by Weick, Sutcliffe, and Obstfeld (1999) of high-reliability organisation (HRO) theory as a counterpoint to profit maximisation. Such theories frequently shock because they contain recommendations that are likely to change preconceived beliefs and firmly held misconceptions, in direct opposition to the dominant logic of the time.
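
The fast and frugal heuristics in pronouncement (2) can be made concrete with Gigerenzer's take-the-best heuristic, which decides between two options on the single most valid cue that discriminates, ignoring all remaining cues. The sketch below is illustrative only; the cue names and campaign data are hypothetical and do not reproduce the in-basket materials.

```python
def take_the_best(option_a, option_b, cues_by_validity):
    """Take-the-best: walk the cues in descending order of validity and
    decide on the first cue on which the two options differ."""
    for cue in cues_by_validity:
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return "A" if a > b else "B"
    return "tie"  # no cue discriminates

# Hypothetical: choosing between two advertising campaigns on binary cues,
# ordered by assumed cue validity (ROI first, media coverage last).
cues = ["roi_positive", "cash_flow_ok", "media_coverage"]
campaign_a = {"roi_positive": 1, "cash_flow_ok": 0, "media_coverage": 1}
campaign_b = {"roi_positive": 1, "cash_flow_ok": 1, "media_coverage": 0}
choice = take_the_best(campaign_a, campaign_b, cues)  # "B": cash flow decides
```

Unlike an AHP-style weighted matrix, the heuristic never trades cues off against each other, which is exactly the frugality the debate is about.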

8.2 Contribution of This Study

The present book extends the theories relating to management competency development and education in decision- and sense-making and adds to the seminal works of Boyatzis, Armstrong, Schank, Brodie, Gigerenzer and other management and marketing experts. The propositions are rigorously tested with regard to the managerial training methods best suited to aid decision competency and decision confidence. The study advances guidelines regarding new or improved tools to prevent graduate managers from making incompetent choices or decisions, and to reduce their inability to drop their tools and previously acquired knowledge should the circumstances favour doing so. Although there is evidence to support the statement by Spanier (2011, p. 94) that “good decision-making can be taught”, the QCA procedures and additional analysis of the data sets did not always succeed in identifying clear-cut causal conditions or “solutions” to indicate unambiguously “how”. Unfortunately there are no simple answers to this, as demonstrated in Chaps. 4–6. The many different configurations of causal conditions (equifinality) send a clear message to educators and talent developers: Simon’s (1992) scissors analogy and Bandura’s three-factor human efficacy theory need to be constantly borne in mind when considering teaching methods. That is, educators need to be constantly aware that cognitive, behavioural, and environmental factors impact competency development. Context, conduct and cognition are important considerations for any and all managerial development interventions. No catch-all method (e.g. placing students in groups, using role-play or providing competency training in isolation) will work for all contexts, all problems and/or all students. Educators and managers need to equip students and protégés with a tool kit of decision-making aids, but students need to practise how to use them and when not to use (“drop”) them.

This study contributes to the body of knowledge regarding organisational knowledge, organisational learning, management development and experiential learning. A further contribution, of particular use to management practitioners and HR specialists, is the four tested in-basket simulations for use in assessment and selection centres. Experientialists (Feldman & Lankau, 2005; Gosen & Washbush, 2004) call for high-quality exercises, and this study contributes four laboratory- and field-tested in-basket simulations. Faculty responsible for re-engineering MBA curricula (or other management education and development interventions) now have access to empirically supported knowledge regarding the four laboratory-tested teaching methodologies.

This study applies QCA as method and set of techniques to the study of managerial decision competency and incompetency, as well as to the study of MBA andragogy. This study is, to the best of the author’s knowledge, the first to apply the fsQCA approach to these disciplines. Given the limitations and complications experienced with traditional statistical and quantitative methods, the existence of a well-documented example of the application of this tool in managerial development could be of great value.

QCA demands transparency from the researcher, which means that it is possible for other researchers to take this study as a starting point and to “re-visit the analysis, for instance taking a few cases out or bringing a few cases in, adding one or two conditions, changing the way some conditions have been dichotomized, etc. … Because QCA is a case-sensitive technique (De Meur, Rihoux, & Yamasaki, 2008), the result of these other analyses will most probably yield some different formulae … which in turn will further enrich cross-case interpretation” (Rihoux & Lobe, 2008, p. 237). In this way, the conceptual work and detailed experimental tools (e.g. in-basket simulations, competency and incompetency training aids) will greatly reduce the preparatory time and labour-intensity of an experiment of this nature by providing pre-tested materials to use as a launch-pad for further research. But there are many unanswered questions, and thus the research journey has only just begun. The next section sets out some suggestions and warnings to assist future research projects in extending the work done thus far.
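
The case-sensitivity that Rihoux and Lobe describe is easy to see in the truth-table step of QCA, where each case is assigned to one configuration of dichotomised conditions: adding or removing a single case can add or remove a whole row, and therefore change the minimised formulae. A toy Python sketch, with invented case labels and membership scores:

```python
def truth_table(cases, threshold=0.5):
    """Assign each case to a truth-table row: a tuple of 0/1 values, one
    per condition, obtained by dichotomising fuzzy memberships at the
    threshold. Returns {configuration: [case ids]}."""
    rows = {}
    for case_id, memberships in cases.items():
        config = tuple(int(m > threshold) for m in memberships)
        rows.setdefault(config, []).append(case_id)
    return rows

# Hypothetical cases with (group_work, training) memberships:
cases = {"A": (0.9, 0.8), "B": (0.7, 0.2), "C": (0.1, 0.9), "D": (0.8, 0.9)}
base = truth_table(cases)  # three observed configurations

# "Re-visit the analysis ... taking a few cases out": dropping case B
# removes the (1, 0) row entirely, changing what QCA can minimise.
revisited = truth_table({k: v for k, v in cases.items() if k != "B"})
```

This is why publishing the raw case memberships, as QCA's transparency norm requires, lets later researchers genuinely re-run rather than merely re-read the analysis.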

Oral feedback immediately after concluding the laboratories, and more recently written feedback from participating MBA students, indicated enhanced self-confidence in completing in-basket assessments during job interviews, plus the additional benefit of experimenting with the “new” decision aids used in the laboratory. The author is both a lecturer and business consultant so feedback of this nature is very rewarding. Evidence of said feedback is available upon request.

8.3 Limitations and Insights Useful for Designing Repeat Studies

What we know now that we did not know before and what we would have done differently

The following limitations may have affected the results of the study. First, although the researcher made every attempt to control all variables in the experiment, a large number of variables may affect the causal conditions as well as the final outcome of the experiment. Such variables include factors that affect participants’ cognitive abilities and cause varying levels of interest and motivational distractions or “noise”: fatigue; personal debilitating emotional factors; existing dislikes or likes between group members resulting in bias towards expressed options (even randomly allocated students belong to a relatively small corps within the university); unpleasant previous experience of events similar to those described in the scenarios; and physical discomfort due to circumstances outside the control of the researcher, such as ailments and other inter- and intra-personal factors. The experimental studies were timed (1) to accommodate the pressures of examinations, so sessions were run in weeks 2–5 of the 8-week terms, and (2) so that participants could select from four different times of the day and four different days of the week.

Secondly, competency and incompetency training was provided in the form of written instructions. Learner styles differ: some learners, classified as “auditory” in the literature, absorb information better when it is presented verbally, whilst others, classified as “kinaesthetic”, learn better through demonstrations and touch. Accordingly, the use of written competency and incompetency training material is a likely impact factor that was not controlled for in this experiment. Whilst random assignment of participants might have reduced or even negated this impact, the study cannot report on the effect of learner style with any authority. In addition, with the benefit of 20/20 hindsight after completing the study and the analysis of the data, the researcher would implement the in-basket training quite differently. Students should be able to verbalise their interpretation of the written training support material and have at least one practice session focused solely on the teaching method (not the decisions, but the process of getting to the decisions). Although the role-play, the goals, the use of the devil’s advocate dissent and the need to extract insights from the group members were actively and thoroughly stressed in the briefing leading up to the 2-h experiment, future research should allow participants to practise these interactive, simulated roles before the experiment. Although the study does not provide evidence for the following claim, it is the researcher’s perception that there was so much focus on getting to the decision that procedural instructions took a back seat and the front seat belonged to “getting the answer right”.

Third, a large proportion (39.3%) of the participating student sample indicated fewer than 5 years of management decision-making experience. While it would be highly desirable to use currently employed managers to improve the generalisability to business executives, of primary importance to this study is the improvement of andragogy for MBA students, which negates this limitation for this study. Further, the skill set and demographics of the participating students are compatible with the larger population of practising managers. Age, gender, race and experience levels vary greatly within organisations, and demographics gathered from NZTE correspond well with those provided by the participants. These demographic indicators may differ substantially in other countries and at other universities.

8.3.1 Pre-existing Experience and Skills

Without a pre-test it was not possible to identify pre-existing skills or decision competencies. A pre-test, though, might have (1) prepared the students to anticipate contextual factors and (2) allowed discussion amongst the very small MBA cohort and thus contaminated the results. In hindsight, it is desirable to have a more quantitative measure of pre-test decision competence than the self-recorded measure of experience captured in the demographic section of the study. The assumption that all MBA students have comparable levels of decision competency might not be sound, and further evidence and a quantifiable, assessable measure of pre-test decision competency are required. Prior knowledge could (and perhaps should) be ascertained by pre-tests. Another way to examine this problem is to selectively pre-familiarise some participants with the issues related to the decisions and compare the impact of this prior exposure with the results achieved by participants not exposed to the materials and competency training aids. Random allocation to the control group should have countered this, which means that the outcome is compared to a control group rather than to an individual’s improvement against his/her own prior performance.

Repeating the study with currently employed managers and comparing those results with the results achieved for MBA students would be desirable. Such a repeated study might contribute to the predictive validity as well as the generalisability of the study. The importance of predictive validity needs to be recognized; however, owing to its exploratory nature, this study estimates only fit validity, not predictive validity.

8.3.2 Time and Timing

Although the pre-test indicated that the 2-h time allocated for the four in-basket decisions was (1) realistic (comparable to the time managers might allocate to such decision tasks, given the other pressures encountered in the real world) and (2) sufficient, some non-verbal indicators (such as surprise or upset when the instructor indicated time was up) flagged the limited time allotment as a possible impact factor. This is especially relevant for the eight cells in which one of the causal factors included interactive group decision-making and discussion. No concerns were raised in experimental groups where decisions were made individually, and all those participants handed their decision sheets in well before the 2 h expired. It might thus be useful to add an additional causal condition in future experiments with a much more generous time allotment (say 3–4 h) to measure and record the impact of additional time for more in-depth group discussions and more time to consider the options available. A further indicator that the time allocated for this study might have been too short is the very cryptic sentences participants used to answer the open-ended questions. Again, the recommended remedy is a series of qualitative questions administered immediately upon conclusion of the simulated group decision intervention, after participants have completed the decision and demographic sheets. This might have given the researcher a better understanding of the factors affecting participants’ final decisions and allowed richer insight into each of the cases in a cell. However, in the words of Gigerenzer and Brighton (2009, p. 116), “more information, computation and time is not always better”, so different conditions in future experiments (including additional discussion or decision time) and the different heuristics participants apply need further examination.

8.3.3 Bounded Rationality

Participants were provided with limited information; although this is typical of MBA cases, it is not reflective of real-life marketing and management decisions. One of the complaints about MBA training is the inability of students to distinguish relevant from irrelevant information. This lack of information, together with the real-life abundance of irrelevant information, was addressed in a small way through the provision of some facts to be ignored, but only very minimally. Further, incompetency training tested participants’ competency in determining relevant facts in different contexts and in “drop tools” strategies to make the most effective decisions. In future, the number of in-basket simulations should perhaps be reduced to keep the time realistic, while the number of information sheets and decision support aids should be increased to reflect real life more closely.

8.3.4 Consensus

Groups were not required to reach consensus. The researcher deliberately chose not to ask participants to reach consensus and instructed the instructor to stress this point carefully during the pre-experiment briefing. The reasons were: (1) reaching consensus requires more time; (2) one decision per cell might not reflect each participant’s own choice, despite the interaction; (3) group dynamics are quite different when groups attempt to reach consensus; and (4) demanding consensus implies that a group facilitator or leader needs to lead people to that point. The researcher did not want to complicate the decision-making process by appointing a group leader, or by investing the additional time required for newly formed groups to elect a leader or to allow a natural leader to emerge. Although this can be seen as a key strength, future studies could compare consensus decision-making outcomes with group interactive decision-making results.

8.3.5 Group Dynamics

Consensus relates closely to teamwork and group dynamics. In real-world scenarios managers often make group decisions during or after interaction with teams they are familiar with. (This may vary substantially from circumstance to circumstance.) Decision-makers may have spent many hours developing team norms and team goals, and thus the dynamics may be very different from those displayed during the experiment. The issue of team formation status (i.e. where teams are in the process of forming, storming, norming, performing, mourning) was not accommodated (Firestien & McCowan, 1988; Osborn, 1963; Putman & Paulus, 2009; Todd, 2005), nor was its impact tested in this study. Participants were given very brief instructions about group interaction in order to optimise the 20 min provided for group interaction during the experiment. These instructions were brief and pointed, but there is no evidence that they were (1) adopted/accepted or (2) implemented during the group discussions. In addition, the 20 min that groups were allowed for interactive role-play and decision-making for each separate in-basket left little time to build cohesive groups (or enter into the five-phase group development process) and to observe the impact of group dynamics. Members of some groups may have known each other better than others; again, random allocation should overcome this, but no specific controls were in place to ensure similar levels of personal familiarity within each team.

In addition, future experiments replicating or extending this study could ask participants to assess this impact to some degree by using the suggested additional feedback sheet in Fig. 8.1. Although this is a self-report survey, the additional case knowledge could provide insights of value to educators and practitioners.
Fig. 8.1

Suggested self-report mechanism on final feedback sheet

8.3.6 Range of Topics

The decision topics were deliberately selected to be mostly marketing related (market share, key clients, service recovery, pricing, advertising, selling and sales training). To ensure verisimilitude and generalisability, a broader range of management decisions might be necessary. In addition, the in-basket simulations comprised only a few pages of details, whereas in real-life business scenarios information overload, as well as the inter-linked nature of decisions, is likely to be part of the decision dilemma. To mirror reality more closely, future research on decision competence could include more complex scenarios, with more useful and more useless information in the scenario material. Decisions affecting more than one strategic business unit should also be included, with more than one or two aspects to compare. Moreover, the decisions in the four in-basket simulations were mostly tactical in nature. Little to no incentive was provided for participants to consider either the wider context within the firm or the long-term impact of the decision. Agency theory (Eisenhardt, 1989) also suggests that decision-makers are likely to consider their own gain, a factor that was not built into the decision-making activities. The extant literature indicates that this plays a significant role in the decisions managers make.

8.3.7 Multiple Choice

Participants were provided with a limited range of answers to each of the in-basket assessments and, due to the very limited time-frame, were not given the opportunity to rationalise or qualify their decisions. Pre-tests indicated that busy executives and MBA students were unlikely to spend more than 2 h in the laboratory, so the additional time required to interview participants will remain a challenge for future research. Qualitative studies that follow the decisions with in-depth interviews would allow additional insights into the reasoning process. Another suggestion is the use of open-ended questions, which might improve participants’ ability to indicate an unlimited range of choices and decisions, considering a variety of factors. The multiple-choice style decision sheets might have yielded answers that do not fully represent participants’ decisions. Students were given neither an alternative beyond the options provided in the multiple choice, nor the opportunity to indicate their level of satisfaction with the answers provided or choices made.

8.3.8 Confidence and Likelihood to Change Decisions

Participants’ self-confidence in their decisions was only tested with a single question. Future researchers should not merely rely on self-recorded measures but should test this confidence. In addition, the question about likelihood to change the decision (after 2 weeks) relied on a selection from four Likert scale indicators. Although the nature and scope of this study did not allow the researcher to repeat the experiment after 2 weeks to validate/disprove the participants’ choice, it would be advisable to do so in future research of this nature. Following the experiment up with an additional chance to reconsider the decision would allow testing hypotheses about the power of “unconscious deliberation” but may counter the “take the best” heuristic. This needs to be tested fully.

Although fsQCA is designed for small-sample experiments, some scholars may find it desirable to replicate the study with a larger and more diverse study sample, thus improving the generalisability. The study considered only students from four universities in New Zealand and although the recorded demographics indicated a very diverse group of participants (age, gender, ethnicity and nationality) it is highly desirable to replicate this experiment with large student groups in other business faculties within New Zealand and in other countries.

8.4 A Final Thought

Finally, the need for accurate assessment of the value of tools to increase decision competence via controlled experimentation will continue beyond any single study. Similar to real-world decision-makers, researchers need to avoid the fallacy that the tools presently being used cannot be improved upon. This study makes a real and measurable contribution to the refinement of research instruments designed to investigate and assess the impact of competency and incompetency training on nurturing executives’ opposable minds through decision competency and incompetency development. Research in this field will continue, and the findings of some of this research will encourage other researchers in the field to further refine understanding of both effective and ineffective decision-making processes.

References

  1. Armstrong, J. S., & Brodie, R. (1994). Effects of portfolio planning methods on decision making: Experimental results. International Journal of Research in Marketing, 11(1), 73–84.
  2. Armstrong, J. S., & Green, K. C. (2007). Competitor-oriented objectives: The myth of market share. International Journal of Business, 12(1), 116–136.
  3. Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171–246). Chicago, IL: Rand McNally.
  4. De Meur, G., Rihoux, B., & Yamasaki, S. (2008). Addressing the critiques of QCA. In B. Rihoux & C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques. Thousand Oaks, CA: Sage.
  5. Eisenhardt, K. M. (1989). Agency theory: An assessment and review. The Academy of Management Review, 14(1), 57–74. doi: 10.2307/258191.
  6. Feldman, D. C., & Lankau, M. J. (2005). Executive coaching: A review and agenda for future research. Journal of Management, 31(6), 829–848. doi: 10.1177/0149206305279599.
  7. Firestien, R. L., & McCowan, R. J. (1988). Creative problem solving and communication behavior in small groups. Creativity Research Journal, 1(1), 106–114. doi: 10.1080/10400418809534292.
  8. Gigerenzer, G. (1991). From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review, 98(2), 254–267. doi: 10.1037/0033-295X.98.2.254.
  9. Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1, 107–143. doi: 10.1111/j.1756-8765.2008.01006.x.
  10. Gosen, J., & Washbush, J. (2004). A review of the scholarship on assessing experiential learning effectiveness. Simulation and Gaming, 35(2), 270–293. doi: 10.1177/1046878104263544.
  11. Osborn, A. F. (1963). Applied imagination (3rd ed.). New York, NY: Scribner.
  12. Putman, V. L., & Paulus, P. B. (2009). Brainstorming, brainstorming rules and decision making. Journal of Creative Behavior, 43(1), 23–40. doi: 10.1002/j.2162-6057.2009.tb01304.x.
  13. Rihoux, B., & Lobe, B. (2008). The case for qualitative comparative analysis (QCA): Adding leverage for thick cross-case comparison. In D. Byrne & C. C. Ragin (Eds.), The Sage handbook of case-based methods (pp. 222–242). Thousand Oaks, CA: Sage.
  14. Rong, B., & Wilkinson, I. (2011). What do managers’ survey responses mean and what affects them? The case of marketing orientation and firm performance. Australasian Marketing Journal, 19(3). doi: 10.1016/j.ausmj.2011.04.001.
  15. Simon, H. A. (1992). Economics, bounded rationality and the cognitive revolution. Aldershot: Edward Elgar.
  16. Spanier, N. (2011). Competence and incompetence training, impact on executive decision-making capability: Advancing theory and testing. Doctoral thesis. Auckland University of Technology, Auckland.
  17. Todd, D. W. (2005). The impact of motivation and conflict escalation on the five zone model for preferred conflict handling and managerial decision making. PhD dissertation, Georgia State University, Atlanta, GA. Retrieved from http://digitalarchive.gsu.edu/managerialsci_diss/10
  18. Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. Research in Organizational Behavior, 21, 81–123.
  19. Wilson, T. D. (2011). Redirect: The surprising new science of psychological change. London: Little Brown and Company.
  20. Woodside, A. G. (2011a). Case study research: Theory, methods, practice. Bingley: Emerald Group Publishing, Ltd.
  21. Woodside, A. G. (2011b). Responding to the severe limitations of cross-sectional surveys: Commenting on Rong and Wilkinson’s perspectives. Australasian Marketing Journal, 19(3), 153–156. doi: 10.1016/j.ausmj.2011.04.004.
  22. Woodside, A. G. (2013). Moving beyond multiple regression analysis to algorithms: Calling for adoption of a paradigm shift from symmetric to asymmetric thinking in data analysis and crafting theory. Journal of Business Research, 66, 463–472.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Arch Woodside, Boston College, Chestnut Hill, USA
  • Rouxelle de Villiers, Department of Marketing, University of Waikato, Hamilton, New Zealand
  • Roger Marshall, Department of Marketing, Advertising, Retailing & Sales, Auckland University of Technology, Auckland, New Zealand
