Journal of Information Technology, Volume 26, Issue 3, pp 205–219

Customer relationship management and firm performance

  • Tim Coltman
  • Timothy M Devinney
  • David F Midgley
Research Article

Abstract

In this paper, we examine the impact of customer relationship management (CRM) on firm performance using a hierarchical construct model. Following the resource-based view of the firm, strategic CRM is conceptualized as an endogenously determined function of the organization's ability to harness and orchestrate lower-order capabilities that comprise physical assets, such as IT infrastructure, and organizational capabilities, such as human analytics (HA) and business architecture (BA). Our results reveal a positive and significant path between a superior CRM capability and firm performance. In turn, superior CRM capability is positively associated with HA and BA. However, our results suggest that the impact of IT infrastructure on superior CRM capability is indirect and fully mediated by HA and BA. We also find that CRM initiatives jointly emphasizing customer intimacy and cost reduction outperform those taking a less balanced approach. Overall, this paper helps explain why some CRM programs are more successful than others and what capabilities are required to support success.

Keywords

customer relationship management; strategic IT capabilities; performance

Introduction

Customer relationship management (CRM) is increasingly important to firms as they seek to improve their profits through longer-term relationships with customers. In recent years, many have invested heavily in information technology (IT) assets to better manage their interactions with customers before, during and after purchase (Bohling et al., 2006). Yet, measurable returns from IT investment programs rarely arise from a narrow concentration on IT alone, with the most successful programs combining technology with the effective organization of people and their skills (Bharadwaj, 2000; Piccoli and Ives, 2005). It follows that the greater the knowledge about how firms successfully build and combine their technological and organizational capabilities, the greater will be our understanding of how CRM influences performance.

Although the market for CRM software and support is strong (Maoz et al., 2007), there remains considerable skepticism on the part of business commentators and academics as to its ultimate value to the corporation and customers. Surveys of IT executives in the business press report that CRM is an overhyped technology (e.g., Bligh and Turk, 2004) and some academics claim the concept is fundamentally flawed because CRM ignores the reality that many customers do not want to engage in relationships (Dowling, 2002; Danaher et al., 2008).

Empirical studies examining the success of CRM technology have failed to alleviate this skepticism as investigations to date span a limited range of activities (Sutton and Klein, 2003), and are noticeably silent on the extent to which CRM investment contributes to firm performance (Boulding et al., 2005). A lack of clear and generalizable empirical support for the expected return from CRM investments has important practical implications for market development and firm profitability. It also raises questions regarding the most appropriate mix of capabilities to effectively exploit investment in CRM. This discussion motivates the two research questions this paper seeks to answer.

  1. Is there evidence that CRM matters? Put more empirically: does CRM contribute to higher firm performance based on standard measures understood by managers?

  2. Given there is a CRM–performance relationship, what lower- and higher-order capabilities are critical to develop and maintain superior CRM? In other words: what is the structural capability path to improved performance?

From a practical and empirical perspective, there are important conceptual and analytic issues in addressing these questions that must be taken into consideration when we attempt to measure capabilities. One school of thought holds that a holistic or aggregate representation is necessary when we examine complex phenomena such as IT (e.g., Swanson and Ramiller, 1997). Others favor a more disaggregate line of empirical analysis; as exemplified by Ray et al. (2005: 626), who state that the ‘impact of IT should be assessed where the first-order effects are expected to be realized.’

This contrast of views presents a dilemma for IT researchers who want: (1) the breadth, comprehensiveness and generalizability of a multidimensional construct to better represent the interdependent nature of IT; and (2) the clarity and precision associated with an examination of the role of specific IT resources that underlie the construct. Our position is that any debate over the degree of aggregation is best resolved empirically. For example, it is possible to combine higher-order multidimensional constructs and their lower-order dimensions within a single analytic framework. Such frameworks allow researchers to identify the respective role of higher- and lower-order dimensions empirically. Unfortunately, such frameworks have received little attention in the IT literature to date (see Wetzels et al. (2009) for a recent exception).

CRM represents a singularly good example of a higher-order construct or meta-capability that is underpinned by specific technological, organizational and human capabilities. In this paper, we measure CRM as an endogenously determined function of the firm's ability to harness and orchestrate lower-order capabilities. Three lower-order capabilities – drawn from the strategy, IT and marketing literatures – provide the basis for our measure of a superior CRM capability. These are: (1) IT infrastructure; (2) human analytics (HA); and (3) business architecture (BA). The first of these capabilities represents the technology, while the other two encapsulate the company's organizational capabilities that complement the technology. This broad approach is common to work regarding what constitutes CRM capabilities (Leonard, 1998; Day, 2003; Tippins and Sohi, 2003).

Furthermore, by accounting for the strategic objectives of the firm, we are able to address the fact that organizations are heterogeneous and will subsume their CRM activities within an overarching strategic imperative. We show that CRM investments can be understood better by accounting for the degree to which firms view CRM as a mechanism aimed at reducing customer management costs or increasing customer intimacy. This approach is consistent with Aral and Weill's (2007: 764) finding that ‘particular IT asset classes deliver higher performance only along dimensions consistent with the strategic purpose of the asset.’

In terms of practice, the present study offers managers seeking to invest in CRM a fresh insight into what it means to be ‘IT savvy.’ Weill and Aral (2006: 40) define this colloquial term as ‘the set of interlocking business practices and competencies that collectively derive superior value from IT investments.’ Our findings imply that CRM has the greatest impact on firm performance when IT resources are combined with organizational capabilities and the firm sets objectives for its CRM initiatives that jointly emphasize customer intimacy and cost reduction.

The balance of the paper is organized as follows. The next section outlines the theoretical background to our work and presents the research model and hypotheses. The ensuing section discusses the research methodology and presents the specific measures used to test our model. A section on data analysis and results precedes the final section, which lays out our main conclusions and the implications of this work for both scholarship and practice.

Theoretical background, research model and hypotheses

Prior research in strategy and management has observed that the degree to which a firm will prosper is, in part, dependent upon the extent to which it possesses capabilities and resources that can be employed to enhance the competitiveness of the business. Considerable empirical work in IT has sought to examine the direct connection between investment in IT and firm performance. However, the findings from this work have been mixed. Some (Weill, 1992; Powell and Dent-Micallef, 1997; Mendelson and Pillai, 1998) report a negative relationship between IT investment and aspects of firm success, while others have demonstrated a positive relationship between IT investment and firm performance. The lack of consistency in these findings is independent of whether performance is defined as financial (Devaraj and Kohli, 2003), productivity driven (Markus and Robey, 1988), process-related (Ray et al., 2005) or the degree of organizational learning (Tippins and Sohi, 2003). Although this research provides evidence of a general relationship, our knowledge of the specific IT infrastructure and organizational factors driving these general results remains limited.

The value of IT to the firm is clearly a complex issue because firms apply IT in manifestly different ways (Kohli and Grover, 2008). Moreover, investment in IT infrastructure enables higher-order business capabilities, which, in turn, have a critical impact on the way business is organized and conducted, but may not immediately appear to be related to that IT investment. For example, Mithas et al. (in press) demonstrate empirically that the ability of firms to provide accurate, timely and reliable data and information to users – what they refer to as a higher-order ‘information management capability’ – is based on an ability to leverage IT infrastructure. Hence, it can be difficult to capture and properly attribute the direct or indirect value generated from investment in IT.

In this paper, we use the resource-centred perspective as the conceptual basis for our model, hypotheses and measures. This perspective has been widely used to assess the strategic value of IT based on the differential qualities of resources, capabilities and work processes (Brynjolfsson and Hitt, 1996; Melville et al., 2004; Ray et al., 2005; Mishra et al., 2007; Oh and Pinsonneault, 2007). Oh and Pinsonneault (2007) divide the resource-centered perspective into two streams: the production function view and the more traditional resource-based view (RBV). The production function view (Dewan and Min, 1997) focuses on explaining variation in firm performance by reference to a collection of production resources (e.g., IT capital) and capabilities (e.g., labor). Although studies in this stream have reported positive relationships between the size of IT investment and organizational performance (e.g., Brynjolfsson and Hitt, 1996), IT investment is generally regarded as a necessary but not sufficient factor in explaining organizational performance (Bharadwaj et al., 1999). In contrast, the traditional RBV literature places greater emphasis on the firm's ability to coordinate tasks, utilizing organizational resources and capabilities to achieve a particular end result. According to Helfat et al. (2007: 4), the ‘resource base’ of an organization includes ‘tangible, intangible, and human assets (resources) as well as capabilities which the organization owns, controls, or has access to on a preferential basis.’ As this use of the term ‘resource base’ implies, we consider capabilities to be ‘resources’ for the purposes of this research.

The broader resource-centered perspective is well suited to the assessment of IT investment because it emphasizes the possibilities and options that IT creates and, more importantly, the way firms make the best use of IT resources (Melville et al., 2004). Although aspects of IT can be ubiquitous, it is the combination of human skills and organizational context that is important to harness the full potential of IT. This combination of capabilities is not evenly distributed between firms and has not been well developed in the theory (Wade and Hulland, 2004).

Conceptual model of CRM performance

CRM represents a strategy for creating value for both the firm and its customers through the appropriate use of technology, data and customer knowledge (Payne and Frow, 2005). This strategy requires focus, training, and investment in new technology and software to aid in the development of value-adding CRM systems. Hence, CRM brings together people, technology and organizational capabilities to ensure connectivity between the company, its customers and collaborating firms.

Several scholars have expressed concerns with the lack of empirical work on the specific IT resources or combination of capabilities that deliver most business value (Bhatt and Grover, 2005; Aral and Weill, 2007; Mithas et al., in press). Our conceptual model draws heavily on the strategy literature and the strategic necessity hypothesis in asserting that although IT is a necessary factor, it rarely, in-and-of-itself, generates sustainable performance advantages (Clemons and Row, 1991). In other words, the business value that is generated by IT is dependent upon the combination of complementary technical, organizational and human resources (Francalanci and Morabito, 2008). Figure 1 illustrates the proposed combination of lower- and meta-capabilities to explain hierarchically how CRM contributes to firm performance.
Figure 1

Model of CRM performance.

A general consensus regarding what constitutes lower-order CRM capabilities has begun to emerge in the strategy, IT and marketing literatures. For example, in a study of Chaparral Steel Corporation, Leonard (1998) found four distinct clusters of core technological capabilities: technical systems, human skills, managerial systems and values. Tippins and Sohi (2003) provide a consistent definition of IT competency as the body of technical knowledge about IT systems, the extent to which the firm uses IT, and the number of IT-related artefacts. In marketing, CRM capabilities have been defined based on: employee values, behaviors and mindsets; customer information availability, quality and depth; and the supporting organizational structures, incentives and controls (Day, 2003).

This foundational work in strategy, marketing and IT provides support for a nomological network of constructs that connects CRM to firm performance based on the three lower-order capabilities. The first is IT technology and infrastructure capabilities, representing the CRM technology that underpins the availability, quality and depth of customer information. The second is human analytic-based capabilities comprising the diverse skills and experience of employees that are necessary to interpret and use CRM data effectively. The third is the business architecture and structural capabilities that embody action in the form of incentives and controls for employee behavior that supports CRM. This conceptualization is similar to prior definitions of CRM in the marketing literature (e.g., Day, 2003) and complements work in IT that emphasizes this level of analysis (e.g., Ray et al., 2005). For brevity, these capabilities will be referred to as IT infrastructure (IT), human analytics and business architecture.

In addition, our model identifies a higher-order construct or meta-capability, superior CRM capability. This measures the contribution of each of the three lower-level capabilities (IT, HA and BA), while also combining the three into one overall construct in an empirically weighted manner. This construct parallels the way firms combine diverse resources to form lower-level capabilities, which are, in turn, combined and managed in the organization's overall capability to execute CRM. It is the extent to which this meta-capability is superior to that of competitors that will influence firm performance, ceteris paribus.

Studies of IT value have also reported mixed results when investigating the question of whether firms are better off pursuing a strategic emphasis based on revenue growth, cost reduction or both (e.g., Mittal et al., 2005). The particular CRM strategic emphasis is germane to this study because CRM programs can focus on customer intimacy (i.e., relationship orientation, catering to individual customer service requirements, etc.), cost reduction, data analytics or a mix of all three (Buttle, 2004). Strategic emphasis is included in our conceptual model because we expect differences across firms that will influence their overall performance. For our purposes, it is important to separate out the effects on performance of CRM strategy from those due to CRM capability.

Development of hypotheses

IT infrastructure

Rapid advances in hardware and software provide firms with a wide range of solutions designed to support CRM (e.g., SAP's CRM suite, Teradata's Enterprise Data Warehouse, etc.). The key IT components are the front office applications that support sales, marketing and service, a data repository that supports collection of customer data, and back office applications that help integrate and analyze the data (Greenberg, 2001). In the case of CRM, business value is unlikely to exist in the technology alone but rather in the capability to draw information from all customer touch-points – including websites, telesales, service departments, direct sales forces and channel partners. The capability to build a coherent picture of the customer is costly for firms to imitate and, in many cases, highly idiosyncratic to the firm. This is critical because recent work demonstrates that firms working with incomplete customer data and imprecise metrics for evaluating customers run the risk of alienating, rather than satisfying, customers (Boulding et al., 2005) and, as a consequence, experience lower profitability (Ryals, 2005).

The stance taken here is that IT infrastructure on its own is well known, mostly stable and widely shared among competing firms, a fact reinforced by a broad literature. Hence, IT alone is unlikely to be a source of direct competitive advantage (Weill and Vitale, 2002; Carr, 2003, 2004). Rather, the scarce resources and subsequent source of business value are the managerial capabilities that are enabled by the technology (Bharadwaj, 2000; Piccoli and Ives, 2005). When IT systems become embedded in the firm's BA and human skills, capabilities can emerge that lead to a level of causal ambiguity and structural complexity that competitors find hard to imitate, thereby enhancing the firm's potential for sustainable competitive advantage (Dierickx and Cool, 1989).

A number of studies have demonstrated that complementary organizational and human resources mediate the impact of IT on firm performance. For example, Francalanci and Morabito (2008) identify that the link between information systems and firm performance is mediated by the absorptive capacity of the firm. Brynjolfsson and Hitt (1996) argue that the business value from IT is only generated when the IT is absorbed within the firm, as a routinized element of a company's value chain. Ray et al. (2005) also provide empirical evidence that performance improvements derive not from IT expenditure alone but from firms' use of embedded IT to support customer service processes.

Where IT infrastructure includes embedded hardware and software, we propose: (1) this infrastructure can support human and organizational capabilities; and (2) the impact of this infrastructure on CRM capability is at least partially mediated by these human and organizational capabilities. This leads to the following three hypotheses:

Hypothesis 1a:

  • More developed IT infrastructure (IT) is positively associated with more developed human analytic capabilities.

Hypothesis 1b:

  • More developed IT infrastructure (IT) is positively associated with more developed customer-oriented BA.

Hypothesis 1c:

  • More developed IT infrastructure (IT) is positively associated with a CRM capability that is superior to competitors.

Human analytics

In the case of CRM, it is unreasonable to expect that an IT capability alone is sufficient to generate performance outcomes. Customer data need to be interpreted correctly within the context of the business, informing the decision-making process sufficiently that good decisions emerge. In this respect, the skills and know-how that employees possess in converting data to customer knowledge are also crucial to success. For example, managers must increasingly cope with vast amounts of rapidly changing and often conflicting market information. While analytic algorithms and data mining techniques can assist with this, making sense of such data often requires human judgment.

Viewed from the resource perspective, this human ability: (1) enables companies to manage the technical and business risks associated with their investment in CRM programs (Bharadwaj, 2000); (2) is based on accumulated experience that takes time to develop; and (3) results from socially complex processes that require investment in a cycle of learning and knowledge codification. This makes it difficult for competitors to know which aspects of a rival's know-how and/or interpersonal relationships make them truly effective (Mata et al., 1995). Although it may be possible for competitors to develop similar skills and experience, it takes considerable time for these capabilities to mature (Lado and Wilson, 1994).

Building on the resource-centered perspective, the knowledge-based view (Grant, 1996) emphasizes that humans with unique abilities to convert data into wisdom can create competitive advantages that enhance firm performance. In the context of customer relationships, such knowledge may include the experience and skills of employees, the models they develop to analyze data, procedures and policies they derive to manage these relationships, and so forth. Overall, the knowledge-based view allows us to derive the following hypothesis:

Hypothesis 2:

  • More developed HA in converting data to customer knowledge is positively associated with a CRM capability that is superior to competitors.

Business architecture

Possession of sophisticated CRM systems, and complex human skills and experience will have little impact on the business unless action is taken. In other words, to improve performance the outputs of any CRM program have to be deployed at scale across the business. Many firms will own the same basic technology and possess similar skills. However, few will possess the organizational architecture of control systems and incentive policies required to fully exploit these resources (Barney and Mackey, 2005). This ability to exploit investment in CRM is observed in an overall BA that supports action before, during and after implementation. It not only ensures that customer knowledge is effectively generated, but more importantly, it ensures that the information is used within the organization to influence competitive advantage. For example, front-line employees are motivated to act on reports generated by the CRM system when making tactical decisions about customers. In the context of CRM, other aspects of this architecture could include training in systems and policies, or control systems that focus on a relationship rather than a transactional view of the customer. Following this line of reasoning we hypothesize that:

Hypothesis 3:

  • More developed customer-oriented BA is positively associated with a CRM capability that is superior to competitors.

The effect of a higher-order CRM capability on performance

There is a temptation to be normative about the pursuit of competitive advantage by directing attention and resources to each of these lower-level CRM capabilities. However, well-developed IT, HA and BA capabilities in isolation are insufficient to generate competitive superiority. Indeed, they confer competitive advantage only to the extent that the managers of the firm can leverage their interrelationships and produce a combination that is superior to that of their competitors (Wade and Hulland, 2004). Amit and Schoemaker (1993) define such second-order or meta-capabilities as the firm's overall ability to combine efficiently a number of resources that engage in productive activity. In other words, the lower-order capabilities such as IT, HA and BA are necessary, but not sufficient, to improve firm performance relative to competitors. Accordingly, we hypothesize that:

Hypothesis 4:

  • Better performing organizations are characterized by a superior combination of IT, HA and BA, resulting in a superior meta-capability of CRM.

The role of strategic emphasis in CRM

According to Bharadwaj et al. (1999: 1020), ‘firms benefit unequally from their different IT investments. Thus it would be interesting to examine the impact of different types of IT investments such as innovative versus non-innovative, strategic versus non-strategic, and internally focused (e.g., process control, coordination etc.) and externally focused investments (customer satisfaction, relationship management, etc.) … .’ In other words, context matters in IT research and studies of IT business value should not simply treat IT as an aggregate, uniform asset.

For example, firms with cost leadership strategies will likely allocate investments towards transactional IT applications where cost reductions are expected. Similarly, organizations pursuing revenue growth and customer intimacy are likely to invest in IT that supports innovation such as: (1) new value propositions; (2) new channels to the customer; and (3) better management of customer segments. It has also been shown that IT can help firms to reduce operational, transactional and marketing costs. In some cases, evidence suggests that firms that focus on either cost reduction or innovation outperform those that focus on both (Aral and Weill, 2007). In other cases, evidence indicates that firms are better off when a dual emphasis on both revenue growth and cost reduction is deployed (Mittal et al., 2005).

If there is a consensus in this research, it is that investments in IT are frequently designed to serve different strategic objectives, with some firms targeting efficiency gains through cost reduction while others target sales growth through customer satisfaction and retention strategies (Ross and Beath, 2002). However, the empirical findings remain mixed as to which strategy is the better, or more dominant option (Mittal et al., 2005). It follows that failure to account for strategic heterogeneity will weaken our ability to predict the investment-to-performance link.

In the case of CRM, two specific and potentially independent strategic points of emphasis are relevant. First, the firm may be seeking to build and enhance longer-term customer relationships, independent of the cost of doing so. Second, the firm may be attempting to be more cost efficient in maintaining these relations, whether through better data collection and analysis, automation of customer-facing processes or the targeting of marketing campaigns.

Evidence suggests that firms see CRM as part of a revenue enhancement strategy, part of a cost reduction strategy or some combination of the two (Payne and Frow, 2005). Along these lines Iriana and Buttle (2006) suggest that there are three possible approaches to CRM: (1) a top-down strategy of customer intimacy to support relationship building through more individualized offers; (2) automation of customer-facing processes to capture cost savings; and (3) a bottom-up approach that focuses on the analysis of data to enhance customer understanding, enable appropriate cross-selling attempts or the better targeting of offers, and so forth. They label these three approaches: strategic, operational and analytic CRM. Consistent with our prior discussion, it is plausible that firms pursue some combination of strategic, operational and analytic CRM to achieve their goals. Such combinations, being reliant on different lower-order capabilities, may also be difficult to imitate, and thus also serve as a source of competitive advantage.

It is important, therefore, to distinguish between the effects on performance due to the CRM meta-capability and those due to the firm's strategic emphasis. Furthermore, it is notable that strategic CRM places greater emphasis on customer value through relationship building and service customization in order to enhance revenues. Operational CRM has a clear cost imperative. Although analytic CRM can enhance revenues, it typically fits more into the cost reduction approach. This is because its main point of emphasis is on replacing a mass approach to marketing with more targeted, and thus less costly, campaigns. Increasing revenues while lowering costs would clearly have the biggest impact on firm profitability. Accordingly, and building on Mittal et al. (2005), we hypothesize that:

Hypothesis 5:

  • A dual strategic emphasis on enhancing revenue while reducing costs will have the greatest positive effect on firm performance, and this effect will be distinct from that of CRM capability.

Research method and measures

Sample characteristics, unit of analysis and data collection

We tested our hypotheses on a cross-sectional sample of business-to-consumer firms based in Australia. This sample was drawn from industry sectors displaying a strong commitment to CRM through high penetration of senior CRM appointments, loyalty programs and database marketing managers (Marketing UK, 2003). They include financial services, airlines, direct insurers, telecommunication utilities, hotels and casinos, and retail companies. The firms selected, thus, share common features in their application of CRM, which makes them suitable to test our hypotheses. They are all moderate to heavy users of CRM, have large numbers of customers, and operate in markets that favor differentiation from competitors in order to achieve their objectives. As our research focus is on differential CRM performance within firms operating on a competitive scale, our data collection targets firms using CRM extensively and is not meant to be representative of all firms.

Our approach is based upon key informants within the firms studied. We identified a competent key informant as a marketing or sales director, chief information officer, chief financial officer or management executive, typically at the general manager level in a strategic business unit. In addition to being well informed on CRM initiatives, such informants are also able to compare their own unit to direct competitors. This is important in order to be able to identify both superior capabilities and performance. Furthermore, the business unit, rather than the firm, is the appropriate unit of analysis because the way CRM is implemented in one unit of a firm can differ from another. For example, CRM in Corporate and Institutional Banking will be different from CRM in Retail Banking.

Respondents were randomly sourced from a commercial contact list. Ninety-seven executives responded to our survey questionnaire, yielding a 21 percent response rate. Eliminating responses with missing data, firms without CRM programs, and one government organization identified as an outlier in standard tests, left 86 respondents across 50 organizations with significant CRM programs. These organizations were primarily traditional users of CRM; half were in banking and insurance (25 firms), followed by IT products and services (6 firms), the hotel and travel industry (5 firms), telecommunications (4 firms), and various other service industries (10 firms). One business unit responded from each firm, with follow-up calls indicating that this unit was the one most involved in CRM. The median business unit in our data had 160 employees and the average unit 1440.

Research has found that multiple informants from the same business unit will reduce the amount of systematic error and yield response data that are superior to single informant reports (Van Bruggen et al., 2002). This is critical for several reasons. First, recent studies in IS have shown that systematic errors can account for more than half of the variance in observed correlations (Woszczynski and Whitman, 2004; Sharma et al., forthcoming). Second, bounded rationality implies that respondents in the same business unit will differ in their assessment of the efficiency and effectiveness of particular capabilities. This is not surprising, because as the theory suggests, process capabilities need to be hard to observe, to ensure that they are hard for competitors to imitate or buy. Therefore, we focus on depth as opposed to breadth in this study and our survey collected multiple responses from each business unit, with a mode of two and maximum of four key informants. Averaging the responses of each business unit's informants provides a better estimate of that business unit's true score (Kumar et al., 1993; Van Bruggen et al., 2002). Our database therefore has 50 rows, where each row represents the average response from each business unit.
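As an illustration of this aggregation step, the following minimal sketch (Python/pandas) averages multiple informant responses within each business unit so that each unit contributes one row. The column names and values are invented for illustration; this is not the authors' code or data.

```python
# Minimal sketch of the informant-aggregation step described above.
# Column names and values are illustrative placeholders only.
import pandas as pd

# Each row is one key informant; several informants can share a business unit.
responses = pd.DataFrame({
    "business_unit": ["BU01", "BU01", "BU02", "BU03", "BU03", "BU03"],
    "perf_roi":      [4, 5, 3, 4, 4, 5],    # 5-point performance item
    "crm_overall":   [6, 5, 4, 5, 6, 6],    # 7-point comparison to competitors
})

# Average the informants within each business unit so that each unit
# contributes a single row (the estimate of its 'true score').
unit_scores = responses.groupby("business_unit", as_index=False).mean(numeric_only=True)
print(unit_scores)   # one row per business unit (50 rows in the authors' data)
```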

Sample size and statistical power

When working with small sample sizes, Marcoulides and Saunders (2006: vi) recommend that a researcher should consider ‘the distributional characteristics of the data, potential for missing data, the psychometric properties of the variables examined, and the magnitude of the relationships considered before deciding on an appropriate sample size to use or to ensure that a sufficient sample is actually available to study the phenomenon of interest.’ First, our sample distribution includes the majority of the population of firms, which are the major users of CRM in their respective industries. This provides confidence that the sample is sufficiently representative of the population strata to support hypothesis testing. Second, the psychometric properties of the variables are all well established in the literature to support the nomological network that underpins this research. Third, we expect strong effect sizes and high reliability. This expectation is based on CRM consulting reports indicating large differences between ‘best-in-class’ and more typical firms (e.g., Aberdeen Group, 2007), and the composite reliability statistics for our measures. In the section ‘Analysis and Results’, we report various statistics and conduct post hoc power tests. We find that N=50 firms can be justified, given our theory, accuracy of measurement, effect sizes and achieved power.

Measures

The survey questionnaire contained items to measure all the constructs and controls in our model, together with definitions for each of the various capabilities, and descriptive items on the respondent and company. Most questions used 5-point or 7-point Likert or semantic differential scales. In those cases where the directionality was reversed to reduce response bias, the results are presented here in a manner that ensures that directionality is consistent and logical. The questionnaire items and descriptive statistics for these data are shown in Table 1. The full questionnaire is available from the authors upon request.
Table 1

Questionnaire items, descriptive statistics and measurement model results for multi-item constructs (each item shows its PLS loading and bootstrap t-statistic; composite reliability and AVE are reported per construct)

Performance (5-point scale): composite reliability 0.85; AVE 58%
Relative to the highest performer in your industry, how has your business performed over the last 3 years?
  • Return on investment (after tax): loading 0.79; t=7.6
  • Success at generating revenues from new products: loading 0.76; t=7.2
  • Reduction in cost of transacting with customers: loading 0.79; t=7.1
  • Level of repeat business with valuable customers: loading 0.70; t=4.4

Superior CRM capability (7-point scale): composite reliability 0.84; AVE 63%
Compared to your direct competitors, how do you rate your organization overall?
  • Skills and experience at converting data to customer knowledge: loading 0.83; t=10.5
  • Customer information infrastructure: loading 0.75; t=5.0
  • Organizational architecture (i.e., alignment of incentives, customer strategy and structure): loading 0.81; t=9.1

Human analytic capability (5-point scale): composite reliability 0.87; AVE 62%
  • To assist staff in extracting, manipulating, analyzing and presenting data in your organization, we have extensive documentation and procedures: loading 0.82; t=16.7
  • Sophisticated models are frequently used to analyze customer data: loading 0.83; t=18.0
  • We have formal procedures for cross-selling and up-selling to customers: loading 0.77; t=10.2
  • When extracting data from CRM systems and databases, most people involved have extensive knowledge of the business issues facing our firm: loading 0.74; t=9.9

IT infrastructure capability (5-point scale): composite reliability 0.83; AVE 56%
  • Our relational databases or data warehouse provides a full picture of individual customer histories, purchasing activity and problems: loading 0.87; t=11.0
  • When interacting with our organization, customers see one seamless face: loading 0.61; t=3.0
  • CRM software allows us to differentiate among customer profitability: loading 0.79; t=8.8
  • We are very good at adapting our IT applications and responding to unplanned customer demands: loading 0.69; t=4.9

Business architecture capability (5-point scale): composite reliability 0.76; AVE 51%
  • To what extent are employee/management incentives used in your organization to support customer relationship building?: loading 0.71; t=4.8
  • Investment in training and other resources to support CRM-related initiatives has been extensive: loading 0.79; t=8.6
  • We take a long-term view to the formation of customer relationships: loading 0.64; t=4.4

CRM strategic emphasis (single item)
  • Log of the ratio of the percentage emphasis placed on customer intimacy to that placed on all other goals: N/A; N/A

Controls
  • Log of number of employees, log of the number of customers: N/A; N/A

Note: N/A – Not applicable.

Dependent variable and control variables

Performance was measured using subjective assessments of the business unit's performance relative to other competitors in the same industry along four dimensions: return on investment, success at generating revenue from new products, reduction in the cost of transacting with customers and level of repeat business with valuable customers. To mitigate the effect of short-term fluctuations in performance, respondents were asked to evaluate relative competitive performance over the ‘last 3 years.’ It should be noted that this definition of performance is one relevant to our domain of interest, CRM, and to testing the validity of our theoretical model. These four dimensions represent the performance outcomes that the literature expects to see from successful CRM initiatives (e.g., Payne and Frow, 2005; Iriana and Buttle, 2006).

Since performance can also be influenced by firm size, we included two control variables to account for this and thus better distinguish the effects of our theoretical constructs. Firm size was operationalized both as the number of customers and the number of employees (Amburgey and Rao, 1996). The distributions of the raw data for these two control variables were skewed, as is usually the case with size data. Marcoulides and Saunders (2006) note that departure from normality is a problem for small samples and so we used natural log transformations of these data in our analyses. We did not include other standard controls such as industry sector because our performance measure is relative to competitors in the same industry.
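As a small illustration of why the transformation helps, the sketch below compares skewness and kurtosis before and after taking natural logs of a synthetic, lognormally distributed size variable; the data are placeholders, not the study's controls.

```python
# Sketch: skewness and kurtosis of a skewed size variable before and after a
# natural log transform (placeholder data, not the study's controls).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_employees = rng.lognormal(mean=5.0, sigma=1.2, size=50)   # skewed, like raw size data

for label, x in [("raw", n_employees), ("log", np.log(n_employees))]:
    print(f"{label}: skewness = {stats.skew(x):.2f}, excess kurtosis = {stats.kurtosis(x):.2f}")
# The strong positive skew of the raw variable largely disappears after the
# log transform, which is the pattern the authors report for their size controls.
```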

Independent variables

To capture the lower-level capabilities of human analytics, IT infrastructure and business architecture, we developed three sets of measures (scales). For HA, we took four scale items from Davenport et al. (2001) that capture the human processes and procedures used to extract raw data and convert them into customer knowledge. These items were based on the key competencies that a firm must develop to build strong analytic capabilities and include: (1) technology skills; (2) statistical modeling and analytic skills; (3) knowledge of the data; (4) knowledge of the business; and (5) communication skills. For the IT infrastructure scale, we used four items from the IT (Bharadwaj, 2000) and marketing literatures (Reinartz et al., 2004) that place strong emphasis on the effectiveness of the integrated IT infrastructure and its ability to generate an accurate picture of the customer. For the business architecture scale, we adapted three items from Day and Van den Bulte (2002) capturing the business influence that incentives, training and culture play in converting customer knowledge into action.

To develop the second-order construct, superior CRM capability, we used an approach similar to Marchand et al.'s (2000) concept of information orientation and Day and Van den Bulte's (2002) concept of customer relating capability. In this case, respondents were asked to compare their overall capability on, for example, HA, directly with their competitors. The question posed was: ‘Compared to your direct competitors, how do you rate your organization's overall skills and experience at converting data to customer knowledge?’ This was repeated for each of the three capabilities. This procedure allowed us to measure superior CRM capability as an empirically weighted composite of these three overall comparisons, as well as to investigate the relationships between this composite and the three lower-level scales discussed above. This dual measurement approach at the higher and lower levels also allowed the structural equation model to be identified for the purposes of estimation. Hence, our measurement approach corresponds to the Multiple Indicators, Multiple Causes (MIMIC) model (Jarvis et al., 2003) and provides a useful alternative to the repeated indicator approach that is also used to measure higher-order constructs (Wetzels et al., 2009).
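To make the MIMIC logic concrete, one way of writing the specification is sketched below. The notation (ξ for the lower-order capability scales, η for the superior CRM meta-capability, y_j for the three competitor-comparison items) is ours and illustrates the general approach only; it is not a statement of the authors' exact estimated model.

```latex
% Sketch of a MIMIC-style specification (illustrative notation, not the authors' exact model)
\begin{align*}
  \eta_{\mathrm{CRM}} &= \gamma_{1}\,\xi_{\mathrm{IT}} + \gamma_{2}\,\xi_{\mathrm{HA}}
                        + \gamma_{3}\,\xi_{\mathrm{BA}} + \zeta
    && \text{(causal side: lower-order capability scales)}\\
  y_{j} &= \lambda_{j}\,\eta_{\mathrm{CRM}} + \varepsilon_{j}, \qquad j = 1, 2, 3
    && \text{(indicator side: three competitor-comparison items)}
\end{align*}
```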

The strategic emphasis construct was measured by asking respondents to allocate 100 points across customer intimacy, operational excellence and analytical objectives for their CRM program and according to their relative importance. Our approach here is similar to the measurement of IT governance proposed by Weill and Ross (2005). They argue that governance performance objectives within the business unit should be weighted by their relative importance. The same approach is used here but we exclude analytical objectives because few firms in our sample emphasized this objective. Rather, these firms placed an emphasis on customer intimacy (revenue enhancement), operational excellence (cost reduction) or some balance between the two. Given this finding, these data were transformed into a single-item measure, namely the ratio of the emphasis placed on customer intimacy to that placed on other objectives. As this ratio also showed a skewed distribution, we used the natural log transformation in our analyses.
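A minimal sketch of this transformation is shown below (Python). The point allocations are invented; only the ratio-and-log step reflects the procedure described above.

```python
# Sketch of the strategic-emphasis measure: the log of the ratio of the
# percentage emphasis on customer intimacy to that placed on all other goals.
# The allocations below are invented for illustration.
import numpy as np
import pandas as pd

alloc = pd.DataFrame({
    "customer_intimacy":      [70, 50, 20, 40],   # points out of 100
    "operational_excellence": [20, 40, 70, 50],
    "analytical":             [10, 10, 10, 10],
})

ratio = alloc["customer_intimacy"] / (100 - alloc["customer_intimacy"])
alloc["crm_strategic_emphasis"] = np.log(ratio)   # natural log to reduce skewness
print(alloc)
```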

Analysis and results

A two-step approach to data analysis was performed that included: (1) a detailed assessment of the measurement model; and (2) estimation of the structural equation model and hypothesis tests.

Assessment of the measurement model

To ensure the validity of all measures, we examined key informant bias, non-response bias, common method bias and convergent and discriminant validity. We also examined the correlation between our subjective measure of performance and objective performance data when available.

To measure the impact of key informant bias, t-tests were used to examine differences of opinion between top (n=37) and middle management (n=49) on several variables (including performance). No significant differences were detected. Similarly, to test for non-response bias, we used the extrapolation procedure proposed by Armstrong and Overton (1977). No systematic differences existed between early and late respondents, suggesting that this bias was not a major concern. We also note our sample is a large proportion of the universe of interest, giving additional confidence that non-response bias is not of concern.

Two approaches were used to examine common method bias and one to reduce it. First, multiple responses were received from the business units studied. This allowed us to compare measures of the independent variables – made by a particular respondent – with a measure of the dependent variable formed from an average of all the responses from that business unit. There was little difference between the coefficients of a model estimated from such data and those reported here, indicating that there was no general factor in these data that might be associated with common method bias. Second, we also used the more traditional Harman's ex post one-factor test to assess common method bias (Podsakoff and Organ, 1986). The results of this test indicated that we needed seven distinct factors to explain 78 percent of the variance in the total set of 21 items. Again, the lack of a dominant single factor suggested that common method bias was probably not an issue. However, as Podsakoff et al. (2003) note, the one-factor test is relatively insensitive and they strongly recommend designing the questionnaire itself to reduce common method bias, albeit injecting a note of caution that scale validity should not be sacrificed for the sake of reducing this bias. Here, the scale items for strategic emphasis, the three CRM capabilities and performance were separated from each other by blocks of questions relating to other constructs not part of this study. Within the blocks relating to the modeled constructs some items had the directionality of their scales reversed to encourage careful answering. Finally, strategic emphasis, the three CRM capabilities and performance were measured with different scale formats (100-point allocation for emphasis, 5-point semantic differentials for the first-order CRM capabilities, 7-point comparisons to direct competitors for the three items identifying the second-order CRM construct and 5-point comparisons to a named industry leader for performance). As Podsakoff et al. (2003) note, all these steps should help reduce common method bias.
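The paper does not show the mechanics of the one-factor test, but a common approximation examines the unrotated component (or factor) solution of all questionnaire items, as in the following sketch; the item data here are random placeholders, not the study data.

```python
# Rough sketch of a Harman-style single-factor check using an unrotated
# principal-component solution (a common approximation of the test described
# above). `items` stands in for the 21 questionnaire items; the values are
# random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.normal(size=(50, 21))      # 50 business units x 21 items (placeholder)

pca = PCA().fit(items)
ratios = pca.explained_variance_ratio_
n_to_78 = int(np.searchsorted(np.cumsum(ratios), 0.78)) + 1

print(f"first component explains {ratios[0]:.0%} of the variance")
print(f"{n_to_78} components are needed to reach 78% of the variance")
# A single dominant factor would signal common method bias; the authors
# report needing seven factors to reach 78 percent of the variance.
```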

Preliminary scale development followed Churchill's (1979) procedure with its emphasis on exploratory factor analysis and internal consistency. Exploratory factor analyses of the underlying questionnaire items indicated one strong dimension for each construct, making it legitimate to regard them as unitary constructs and compute reliabilities. The five constructs based on multi-item measures had composite reliabilities greater than the acceptable threshold of 0.70. These are reported in Table 1. The table also contains the loadings and bootstrap t-statistics for each item and the average variance extracted (AVE). The lowest loading was 0.61, with 15 of the 18 loadings above the norm of 0.70. The lowest t-statistic was 3.0, with 13 of the 18 being above 5, indicating very stable estimates. In all cases, the AVE was above the norm of 50 percent. Overall, our measures have acceptable convergent validity.
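For readers who want to reproduce the reliability statistics, the sketch below applies the standard composite reliability and AVE formulae to the Performance loadings reported in Table 1; it recovers the 0.85 and 58 percent shown there.

```python
# Composite reliability and AVE from standardized loadings, illustrated with
# the Performance items in Table 1.
import numpy as np

loadings = np.array([0.79, 0.76, 0.79, 0.70])    # Performance construct (Table 1)

ave = np.mean(loadings ** 2)                                                   # average variance extracted
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))   # composite reliability

print(f"AVE = {ave:.2f}")                     # ~0.58, i.e. the 58% in Table 1
print(f"composite reliability = {cr:.2f}")    # ~0.85, as in Table 1
```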

We assessed discriminant validity by comparing the correlation between latent constructs and the square root of the AVE for each (Fornell and Larcker, 1981). The correlation matrix in Table 2 shows that these square roots – shown on the diagonal – are greater than the corresponding off-diagonal elements. Thus, it is possible to conclude that each measure is tapping a distinct and different construct. For completeness, Table 2 also includes the single-item construct of strategic emphasis, together with the two control variables.
Table 2

Correlation of latent constructs (diagonal elements are square roots of average variance extracted)

                                         1      2      3      4      5      6      7
1. Human knowledge capability          0.79
2. IT infrastructure capability        0.58   0.75
3. Business architecture capability    0.61   0.55   0.72
4. Superior CRM capability             0.59   0.49   0.61   0.80
5. Performance                         0.36   0.37   0.39   0.46   0.78
6. CRM strategic emphasis (a)         −0.13  −0.11  −0.02  −0.11  −0.18   1.00
7. Control: number of customers (a)    0.01  −0.03  −0.08  −0.05  −0.23  −0.41   1.00
8. Control: number of employees (a)    0.23   0.01   0.13   0.32   0.23  −0.13   0.23

(a) Log transformed to reduce skewness.
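The Fornell and Larcker comparison described above can be checked directly against Table 2; the sketch below does so for the five multi-item constructs (the single-item measures are excluded because AVE is not defined for them).

```python
# Fornell-Larcker check for the five multi-item constructs in Table 2: the
# square root of each construct's AVE (diagonal) should exceed its
# correlations with every other construct.
import numpy as np

sqrt_ave = np.array([0.79, 0.75, 0.72, 0.80, 0.78])     # Table 2 diagonal
lower = np.array([                                      # Table 2 lower triangle
    [0.00, 0.00, 0.00, 0.00, 0.00],
    [0.58, 0.00, 0.00, 0.00, 0.00],
    [0.61, 0.55, 0.00, 0.00, 0.00],
    [0.59, 0.49, 0.61, 0.00, 0.00],
    [0.36, 0.37, 0.39, 0.46, 0.00],
])
corr = lower + lower.T                                  # symmetric, zero diagonal

for i in range(5):
    max_r = np.abs(corr[i]).max()
    print(f"construct {i + 1}: sqrt(AVE) = {sqrt_ave[i]:.2f} > max |r| = {max_r:.2f} "
          f"-> {sqrt_ave[i] > max_r}")
# Every comparison holds, supporting the discriminant validity conclusion above.
```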

Despite the potential for reporting biases, research has shown that self-reported performance data are generally reliable (e.g., Dess and Robinson, 1984; Fryxell and Wang, 1994). We did our own validation comparing the self-reported measures with objective measures of financial performance obtained from a commercially available database. The objective measures included profit and sales revenue – common accounting-based measures – and Economic Value Added (EVA) – a common market-based measure. We obtained these data for half of the firms in our sample. The correlation between our subjective measure of ‘overall performance’ and the objective profit/revenue ratio was 0.28 (P<0.01). Significant correlations were also found between subjective measures of sales growth and profit/revenue ratio (0.31) and subjective measures of success generating revenue from new products and EVA (0.30). One issue is that these commercially available data are for the firm as a whole while the unit of analysis for our purposes is a business unit. Another is that our definition of performance is oriented to the specific impact of CRM initiatives, whereas the commercial data only looks at higher-level outcomes. Nevertheless, we observed significant correlations between the subjective and objective measures of performance. This gave us some added confidence in the validity of the measures.

The structural model

We tested the conceptual model shown in Figure 1 and its associated hypotheses using partial least squares (PLS). Here, we used the SmartPLS software to generate our estimates (Ringle et al., 2005). PLS relies on bootstrapping techniques to obtain t-statistics for the path coefficients and hypothesis tests. Following standard heuristics, we re-sampled 200 times to obtain these statistics and used the default construct-level alignment of samples.
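The sketch below illustrates the bootstrap logic behind these t-statistics for a single path, using ordinary least squares on synthetic data. It only shows the idea of re-sampling 200 times and dividing the estimate by its bootstrap standard error; the actual estimates in this paper come from SmartPLS, not from this code.

```python
# Illustrative bootstrap t-statistic for one path coefficient (synthetic data;
# the paper's estimates come from SmartPLS, not from this sketch).
import numpy as np

rng = np.random.default_rng(42)
n = 50
crm_capability = rng.normal(size=n)
performance = 0.4 * crm_capability + rng.normal(scale=0.9, size=n)
X = np.column_stack([np.ones(n), crm_capability])

beta_hat = np.linalg.lstsq(X, performance, rcond=None)[0][1]   # full-sample path estimate

boot = []
for _ in range(200):                                           # 200 resamples, as in the paper
    idx = rng.integers(0, n, size=n)
    boot.append(np.linalg.lstsq(X[idx], performance[idx], rcond=None)[0][1])

t_stat = beta_hat / np.std(boot, ddof=1)                       # bootstrap SE in the denominator
print(f"path = {beta_hat:.2f}, bootstrap t = {t_stat:.1f}")
```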

PLS and sample size

Marcoulides and Saunders (2006) set out the following five steps for assessing the adequacy of data for PLS modeling, particularly data from small samples.

  1. Screen the data: Missing data, outliers and non-normally distributed variables can pose problems in PLS analyses of small samples. Here, we eliminated firms with missing data and one obvious outlier. Both graphical inspection and skewness and kurtosis statistics indicate that the variables for the remaining firms are normally distributed (after natural log transformations in the case of strategic emphasis and size controls).

  2. Examine the psychometric properties of all the variables in the model: Poorly measured variables can pose problems in small samples. However, as discussed previously, all our constructs appear well measured, showing more than adequate convergent and discriminant validity.

  3. Examine the magnitude of the relationships and effects between the variables in the model: If weak effects are expected and the variables are poorly measured, larger sample sizes will be needed to reject hypotheses. As noted, the variables used here are well measured and, as will be discussed in detail later, the observed effects are substantial. We are able to explain 46 percent and 33 percent of the variance in our two principal constructs, superior CRM capability and performance, respectively, and three of the five path coefficients relating to the hypotheses exceed 0.30.

  4. Examine the magnitude of the standard errors of the estimates considered in the proposed model and construct confidence intervals for the population parameters of interest: Unstable coefficients and wide confidence intervals can be a sign of inadequate sample size. Our use of bootstrapping reveals the majority of coefficients to be stable with narrow confidence intervals. In the outer (measurement) model, the bootstrap t-statistics range from 3 to 18 and in the inner (structural) model the t-statistics on the significant paths are all greater than the norm of 2.

  5. Assess and report the power of the study: We used the software G-Power 3.1 (Faul et al., 2007) to conduct a post hoc power test on the path coefficients associated with our hypotheses by excluding variables in sequence from the model. This identifies the variance that excluded variables account for independently, and after controlling for the variance explained by the other variables we retain in the model.
First, we examine the paths from strategic emphasis and superior CRM capability to business unit performance. The joint effect size is 0.16 and, with alpha set to 0.05 and a target power (1−β) of 0.95, the actual power achieved in our study is 0.88 (controlling for the number of customers and employees). This achieved power is well above the commonly accepted norm of 0.80. However, we do not have adequate power to compare the relative importance of each construct with the other. The effect size for strategic emphasis on its own is 0.06 and for superior CRM capability 0.11, with power of 0.51 and 0.74, respectively.

Second, a similar result holds for the components of superior CRM capability. Human analytics and business architecture have a joint effect size of 0.21 and power of 0.94 (controlling for IT infrastructure), well above the commonly accepted norm. And here we can make some comparisons: each of these constructs has an essentially equal effect size (0.11 and 0.10, respectively), a conclusion reached with reasonable power (0.75 and 0.70, respectively).

Overall, these tests suggest we have adequate power to validate our model.
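For readers who want to see the mechanics, the sketch below approximates a post hoc power calculation for the joint test of strategic emphasis and superior CRM capability using the noncentral F distribution. The degrees-of-freedom and noncentrality conventions are ours (Cohen's), so the result is indicative only and need not reproduce the G-Power figure of 0.88 exactly.

```python
# Approximate post hoc power for a joint test of two predictors, given the
# reported effect size f^2 = 0.16, N = 50 and alpha = 0.05. Noncentrality
# conventions differ between texts, so this is indicative rather than an
# exact reproduction of the reported value.
from scipy import stats

f2 = 0.16                       # joint effect size (strategic emphasis + superior CRM capability)
n, p_total, u = 50, 4, 2        # sample size, total predictors in the model, predictors tested
v = n - p_total - 1             # denominator degrees of freedom
lam = f2 * (u + v + 1)          # Cohen's (1988) noncentrality convention

f_crit = stats.f.ppf(0.95, u, v)            # critical F at alpha = 0.05
power = 1 - stats.ncf.cdf(f_crit, u, v, lam)
print(f"lambda = {lam:.2f}, approximate power = {power:.2f}")
```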

Effect of CRM on business unit performance

The main effects model (see Figure 2) reveals a number of interesting findings. First, although PLS does not have an overall index of model fit, the fact that the key constructs are well explained and most path coefficients are statistically greater than zero and in the predicted direction lends support to the model. The three lower-level capabilities explain 45 percent of the variance in the enterprise-level capability of superior CRM. In turn, this capability, along with strategic emphasis and the two controls, explains 33 percent of business unit performance. Forty-five percent and 33 percent are relatively high levels of explanation for a model from cross-sectional survey data (Chin, 1998).
Figure 2

Empirical model (structural model PLS path coefficients and bootstrap t-statistics).

Second, the paths from IT infrastructure to human analytic capability and business architecture capability are positive and significant (β=0.60, P<0.01 and β=0.54, P<0.01, respectively). Although the direct path between IT infrastructure and superior CRM capability is positive it is not significant (β=0.11, P=n/s), while the direct paths from both human analytic capability and business architecture capability to superior CRM capability are positive and significant (β=0.30, P<0.01 and β=0.36, P<0.01, respectively).

Taken together, these results suggest that, as hypothesized, the effects of IT infrastructure on superior CRM capability are mediated through the capabilities of human analytics and business architecture. Indeed, our results indicate that IT effects are fully mediated by human and organizational capabilities. However, to test this full mediation hypothesis more thoroughly, we draw on a recent technical literature. This literature questions the well-known and widely applied Baron and Kenny (1986) tests for mediation while emphasizing the superiority of bootstrap procedures for statistical tests. Two conclusions from this literature are particularly relevant to our analysis (we refer readers to the cited papers for more details – in particular, Zhao et al. (2010) for a useful review).

The first conclusion relates to Baron and Kenny (1986). They set out three tests to establish mediation, derived from three separate regressions. In their view, mediation is established if: (1) a regression of the mediator on the independent variable shows a significant effect; (2) a regression of the dependent variable on the independent variable – often called ‘the effect to be mediated’ – shows a significant effect; and (3) a regression of the dependent variable on both the independent variable and the mediator shows a significant effect of the mediator. More recently, several authors have argued that the second test is not necessary and can be potentially misleading because it confounds the direct effect with the total effects of the model (e.g., Kenny et al., 1998; MacKinnon et al., 2000). Indeed, their review of this and other related literature led Zhao et al. to conclude that to show mediation ‘all that matters is that the indirect effect is significant’ (2010: 204). Their conclusion is important here because the direct path between IT infrastructure and superior CRM capability is not significant, while the indirect paths through human analytics and business architecture are. In fact, our results correspond to Zhao et al.'s category of ‘indirect-only mediation’ (2010: 201), which also implies that because the direct effect is small or zero, there are unlikely to be any omitted mediating variables.

The second conclusion from this literature goes directly to the problem of showing that the indirect effect is significant. Traditionally, and again following Baron and Kenny, the Sobel test has been used for this purpose. However, this test assumes normality, an assumption that has led many authors to question its adequacy (Zhao et al., 2010). The indirect path involves the product of two coefficients, and the sampling distribution of this product is approximately normal only in large samples, not in those typically seen in research studies. As an alternative, Preacher and Hayes (2004) recommend a bootstrap test, particularly when the model involves the simultaneous test of more than one mediator, as it does here. Applying their methods via the SAS script they provide at www.comm.ohio-state.edu/ahayes and using the recommended 5000 bootstrap samples, we found that the 95 percent bootstrap confidence intervals for the total effect and for the indirect effects through human analytics and business architecture were all positive and did not include zero. Moreover, as found before, the direct effect of IT infrastructure was not significant. This test confirms indirect-only mediation and implies that, although IT infrastructure does not have a significant direct effect on superior CRM capability, it does have a strong indirect effect. IT infrastructure therefore plays an important role in enabling staff to convert customer data into knowledge, and in this way supports the capabilities that underpin CRM and improve firm performance. Equally, IT infrastructure plays an important role in supporting customer-oriented incentives, training and goals within the business, and thereby similarly supports CRM and improved firm performance. Hence, both Hypotheses 1a and 1b are supported while Hypothesis 1c is rejected.
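To make the bootstrap procedure concrete, the following Python sketch mirrors the logic of the Preacher and Hayes test. It is not the SAS macro used in the study; the data file, column names and the use of ordinary least squares on construct scores are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per business unit with standardized construct scores for
# IT infrastructure (iti), human analytics (ha), business architecture (ba) and
# superior CRM capability (crm). File and column names are illustrative only.
df = pd.read_csv("crm_constructs.csv")

rng = np.random.default_rng(2011)
n, B = len(df), 5000
ind_ha, ind_ba, direct, total = [], [], [], []

for _ in range(B):
    boot = df.iloc[rng.integers(0, n, size=n)]          # resample cases with replacement
    # a-paths: IT infrastructure -> each mediator
    a1 = sm.OLS(boot["ha"], sm.add_constant(boot["iti"])).fit().params["iti"]
    a2 = sm.OLS(boot["ba"], sm.add_constant(boot["iti"])).fit().params["iti"]
    # b-paths and direct effect: CRM capability regressed on both mediators and ITI
    fit = sm.OLS(boot["crm"], sm.add_constant(boot[["ha", "ba", "iti"]])).fit()
    b1, b2, c_prime = fit.params["ha"], fit.params["ba"], fit.params["iti"]
    ind_ha.append(a1 * b1)                               # indirect effect via human analytics
    ind_ba.append(a2 * b2)                               # indirect effect via business architecture
    direct.append(c_prime)                               # direct effect of ITI on CRM capability
    total.append(a1 * b1 + a2 * b2 + c_prime)            # total effect

def ci95(x):
    return np.percentile(x, [2.5, 97.5])                 # percentile bootstrap confidence interval

print("indirect via HA:", ci95(ind_ha))
print("indirect via BA:", ci95(ind_ba))
print("direct effect:  ", ci95(direct))
print("total effect:   ", ci95(total))
```

Intervals for the two indirect effects and the total effect that exclude zero, combined with an interval for the direct effect that does not, reproduce the indirect-only pattern reported above.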

Consistent with our other hypotheses, superior CRM capability is driven primarily by human analytics and appropriate business architecture. These positive and significant path coefficients provide support for Hypotheses 2 and 3. As we argued in Hypothesis 4, individual capabilities are necessary but not sufficient for superior performance. What is required is the orchestration of individual capabilities – that do not individually need to be superior to the competition – into a higher-order capability that is superior to the competition. The results in Figure 2 are as theoretically expected. Superior CRM capability has a significant impact on performance (β=0.36, P<0.01), providing support for Hypothesis 4.

Finally, CRM strategic emphasis, or more specifically the ratio of the emphasis placed on customer intimacy to that placed on cost reduction, has a significant impact on performance (β=−0.27, P<0.05). The negative sign implies that an increasing emphasis on customer intimacy relative to cost reduction detracts from performance. Figure 3 illustrates this graphically. The plot shows the estimated scores on the latent performance construct against the quartiles of the distribution of the strategic emphasis ratio. Quartile 1 represents those business units that place their dominant emphasis on operational excellence (cost reduction), and Quartile 4 represents those that place their dominant emphasis on customer intimacy (revenue enhancement). As can be seen, both these groups perform relatively poorly. It is the business units with a greater balance between revenue enhancement and cost reduction goals (Quartiles 2 and 3) that perform better. In particular, Quartile 3 – which has a 1:1 balance between the two – performs by far the best. Hence, Hypothesis 5 is supported. From the within-quartile means, we can see that the negative coefficient in the linear PLS regression is essentially a contrast between Quartiles 1–3 and Quartile 4 (business units that place a very high emphasis on customer intimacy). In our data, overemphasizing the customer is detrimental to the bottom line.
Figure 3. Performance and strategic emphasis.
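A minimal sketch of the quartile comparison behind Figure 3 is given below (the column names and data file are again hypothetical; the paper's performance scores come from the PLS model rather than from raw survey items):

```python
import pandas as pd

# Hypothetical columns: 'emphasis_ratio' (customer intimacy emphasis / cost reduction emphasis)
# and 'performance' (estimated latent performance score for each business unit).
df = pd.read_csv("crm_constructs.csv")

# Bin the strategic emphasis ratio into quartiles and compare mean performance per bin.
df["quartile"] = pd.qcut(df["emphasis_ratio"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("quartile", observed=True)["performance"].agg(["mean", "count"]))
```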

Effect of control variables on performance

Of the two firm-size control variables, one is significant and worth discussing: the path from the (log-transformed) number of customers to business unit performance. The path coefficient is negative and of magnitude 0.36 (P<0.01, two-tailed), implying that increasing numbers of customers are associated with weaker relative performance. One possible reason for this finding is that the sample is heavily skewed towards the financial services sector, where CRM has been widely embraced. The global banking meltdown has demonstrated that growth strategies are associated with considerable financial exposure. However, the extent to which this has played out in Australia is subject to debate. Australia has a strong banking system that was not subject to the same liquidity issues facing US and European financial institutions. The Australian system is somewhat oligopolistic: it is segmented into the ‘Big 4’ banks, which dominate the sector both geographically and in terms of services and markets, and many smaller regional consumer-oriented banks that compete ferociously via face-to-face services. One can speculate that the larger firms are using CRM capability to maintain their control of the market oligopolistically rather than to improve their position competitively. This variable is therefore worth including in future research on IT and firm performance. It had significant effects, whereas the more traditional measure – number of employees – did not, a fact that is consistent with the banks’ asset growth during a period when downsizing was rampant.
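For completeness, a minimal sketch of how such a control would typically enter the analysis is shown below (hypothetical file and column names; the actual estimation in the paper was carried out within the PLS model, not via the ordinary regression shown here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("crm_constructs.csv")                 # hypothetical file and columns
df["log_customers"] = np.log(df["num_customers"])      # log transform tames the heavy right skew in customer counts

z = lambda s: (s - s.mean()) / s.std()                 # standardize so slopes are comparable to PLS path coefficients
X = sm.add_constant(df[["crm", "log_customers", "num_employees"]].apply(z))
print(sm.OLS(z(df["performance"]), X).fit().params)    # in data like ours, log_customers carries a negative coefficient
```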

Discussion and theoretical contributions

Organizations frequently assume that advances in IT infrastructure and software will not only generate an economic return but also serve to define a business and its competitive strategy (Bharadwaj, 2000; Santhanam and Hartono, 2003). This study makes three important contributions to understanding this basic supposition by addressing: (1) how to empirically measure the impact of IT; (2) the specific role that IT actually plays in supporting a CRM program; and (3) the contribution of CRM programs to firm performance. Each of these points is discussed in turn.

First, our study reveals that the contribution of IT to a CRM program is best measured as a higher-order combination of IT, human and business capabilities. This follows because CRM is embedded in a web of capabilities, none of which is superior on its own but which, when combined with appropriate resources and other capabilities in an organizing context, create a higher-order capability that can make a significant contribution to firm performance. Put succinctly, few companies will master these socially complex capabilities effectively. This is exactly why CRM capability is potentially a source of competitive advantage – it takes time and effort to develop, it is rare and difficult to imitate, and it is causally ambiguous. This is the essence of the RBV of the firm (Newbert, 2007).

Second, the indirect contribution of IT to a superior CRM capability stands in contrast to what the sales people of companies such as Siebel, Oracle, SAP and SAS would like us to believe. Alone, IT offers no significant competitive advantage to the firm, but this does not negate its fundamental operational importance to CRM. IT is clearly necessary to automate customer touch-points, to combine data silos and to enable customer data interpretation. However, this aspect of IT is effectively commoditized and, alone, adds nothing to competitive advantage. Our findings validate existing ‘wisdom’ in the literature, where scholars have concluded that, in order to be successful, organizations must combine IT with another capability (Powell and Dent-Micallef, 1997; Bharadwaj, 2000; Day, 2003; Piccoli and Ives, 2005).

The results also support Zuboff (1988), who claims that one of the primary reasons many organizations fail when implementing new forms of IT is because they simply do not have the requisite skills and experience necessary to use the available data. The specific human capabilities and business structures revealed in this study are critical to transform what is essentially a passive resource (i.e., IT-enabled customer data) into actionable decisions such as whether a customer is more or less important, whether an idea for a new product is attractive or marginal, and so on. In other words, firm performance is improved not through the simple possession of capabilities but because the firm makes better use of its capabilities.

Third, the survey results confirm that a higher-order ‘superior CRM capability’ is a robust indicator of firm performance. It provides greater theoretical parsimony, reduces model complexity and reinforces the finding that IT business value is represented in those behaviors manifested as a consequence of IT investment (Seddon, 1997). This is particularly important because, although companies are under constant pressure to engage in a plethora of IT-based initiatives, few have the potential to use those initiatives to create positions of sustained measurable advantage. This crucial point has not been well integrated theoretically by IT researchers, nor has it been incorporated in the measurement models used. For example, Bharadwaj (2000), Barua et al. (2004) and Ray et al. (2005) refer to a superior IT capability but measure IT capabilities independently, without reference to the firm's competitors. Yet as a firm's performance is largely determined by its strengths and weaknesses relative to its competitors, unless one or more of the firm's capabilities is superior to the competition, it is unlikely to achieve better performance.

Finally, our results reveal that an optimal CRM strategy should jointly emphasize revenue growth and cost reduction. This is important in providing a consistency not seen in prior research. For example, Rust et al. (2002) stress that there can be conflict between a revenue expansion and cost reduction strategy, whereas Homburg et al. (2008) report that a dual strategic emphasis has a positive impact on customer profitability.

Managerial implications

There is a temptation for managers to be normative about the pursuit of competitive advantage and to direct attention and resources toward particular CRM capabilities, mainly because doing so allows them to simplify a complex CRM implementation and concentrate their efforts on ‘getting it right,’ one capability at a time. This approach, however, would seem to be flawed, as well-developed technical, human and business capabilities in isolation are insufficient to generate competitive superiority. In the specific case of CRM, each capability is nested within an intricate organizational system of interrelated and interdependent resources.

By comparing capabilities relative to competitors, we offer benchmark data that show managers the necessary conditions for success. However, knowledge of what is required is not, per se, sufficient for success. Exercising these capabilities involves a series of judgments about the particular CRM strategic emphasis. An overemphasis on customer intimacy to the exclusion of operational efficiency and analytic orientations will actually diminish performance. This observation reaffirms a growing consensus that the context within which IT is applied is an important feature of overall performance (Ray et al., 2005). In other words, to start ‘dating’ customers with the promise of a genuine relationship, but without the capability to fulfill it efficiently, is a dangerous strategy: customers’ expectations are not met, staff become frustrated and executives are disappointed.

Limitations and direction for further research

This study has limitations that qualify our findings and present opportunities for future research. Although it is often argued that cross-sectional designs are justified in exploratory studies that seek to identify emerging theoretical perspectives, there is always the issue of capturing causality. Therefore, the results of this study should be viewed as preliminary evidence that the main constructs (i.e., CRM capabilities) influence performance. This echoes the now customary call for the use of longitudinal studies to corroborate cross-sectional findings and examine performance prior to and after a CRM program implementation.

Furthermore, researchers in IT acknowledge that, despite considerable investigation, the nature of the complex relationship between IT infrastructure and organizational performance remains only partially understood (Oh and Pinsonneault, 2007). ‘[C]ontext matters in MIS research’ (Carte and Russell, 2003: 480), and the lack of a direct impact of IT infrastructure on CRM capability does not imply that IT does not matter. We expect that for many companies IT infrastructure is a strategic necessity whose benefits flow through the support it provides to other capabilities. In this paper, we demonstrate one example of this, where IT infrastructure plays a critical role in supporting human analytic and BA capabilities. We expect that more examples of how IT supports other capabilities can be found, and future research should seek to extend the work in this paper.

Finally, because our study is representative of large, high-performing organizations that use CRM as part of their strategy, one could reasonably argue that such organizations benefit through the reinvestment of profits enabling them to devote considerable resources to CRM programs, thereby reinforcing their success. Future work should seek to control for resource munificence (Klein, 1990). Equally, studies which contrast adopters and non-adopters of strategic CRM may also be informative.

Conclusion

CRM suffers when it is poorly understood, improperly applied, and incorrectly measured and managed. This study reveals the combination of investment commitments in human, technological and business capabilities required to create a superior CRM capability. The exact extent of these capabilities is ex ante indeterminate and should be guided by a strategic emphasis that combines customer intimacy and operational excellence. By integrating two schools of thought – capabilities and strategic emphasis – we build a more managerially relevant theory of CRM performance that shows why CRM programs can be successful and what capabilities are required to support success.

References

  1. Aberdeen Group (2007). Customer Value Management: Keeping profitable customers on board [www document] http://www.aberdeen.com/Research (accessed 17 April 2008).
  2. Amburgey, T.L. and Rao, H. (1996). Organizational Ecology: Past, present, and future directions, Academy of Management Journal 39 (5): 1265–1286.
  3. Amit, R. and Schoemaker, P.J.H. (1993). Strategic Assets and Organizational Rent, Strategic Management Journal 14 (1): 33–46.
  4. Aral, S. and Weill, P. (2007). IT Assets, Organizational Capabilities, and Firm Performance: How resource allocations and organizational differences explain performance variation, Organization Science 18 (5): 763–780.
  5. Armstrong, J.S. and Overton, T.S. (1977). Estimating Nonresponse Bias in Mail Surveys, Journal of Marketing Research 16 (8): 396–402.
  6. Barney, J.B. and Mackey, T.B. (2005). Testing Resource-based Theory, in D.J. Ketchen and D.D. Bergh (eds.) Research Methodology in Strategy and Management, Greenwich, CT: Elsevier, pp. 1–13.
  7. Baron, R.M. and Kenny, D.A. (1986). The Moderator-mediator Variable Distinction in Social Psychological Research: Conceptual, strategic and statistical considerations, Journal of Personality and Social Psychology 51 (6): 1173–1182.
  8. Barua, A., Konana, P., Whinston, A.B. and Yin, F. (2004). Empirical Investigation of Net-enabled Business Value, MIS Quarterly 28 (4): 585–621.
  9. Bharadwaj, A.S. (2000). A Resource-based Perspective on Information Technology Capability and Firm Performance: An empirical investigation, MIS Quarterly 24 (1): 169–196.
  10. Bharadwaj, A.S., Sambamurthy, V. and Zmud, R.W. (1999). IT Capabilities: Theoretical perspectives and empirical operationalization, in Proceedings of the 20th International Conference on Information Systems, Charlotte, North Carolina: AIS Electronic Library, pp. 378–385.
  11. Bhatt, G.D. and Grover, V. (2005). Types of Information Technology Capabilities and Their Role in Competitive Advantage: An empirical study, Journal of Management Information Systems 22 (2): 253–277.
  12. Bligh, P. and Turk, D. (2004). CRM Unplugged: Releasing CRM's strategic value, Hoboken, NJ: John Wiley & Sons, Inc.
  13. Bohling, T., Bowman, D., LaValle, S., Mittal, V., Narayandas, D., Ramani, G. and Varadarajan, R. (2006). CRM Implementation: Effectiveness issues and insights, Journal of Services Research 9 (2): 184–194.
  14. Boulding, W., Staelin, R., Ehret, M. and Johnston, W. (2005). A Customer Relationship Management Roadmap: What is known, potential pitfalls, and where to go, Journal of Marketing 69 (4): 155–166.
  15. Brynjolfsson, E. and Hitt, L.M. (1996). Paradox Lost? Firm-level Evidence on the Returns to Information Systems Spending, Management Science 42 (4): 541–559.
  16. Buttle, F. (2004). Customer Relationship Management: Concepts and tools, Oxford: Elsevier.
  17. Carr, N.G. (2003). IT Doesn’t Matter, Harvard Business Review 81 (5): 41.
  18. Carr, N.G. (2004). Does IT Matter?: Information technology and the corrosion of competitive advantage, Boston, MA: Harvard Business School Press.
  19. Carte, T.A. and Russell, C.J. (2003). In Pursuit of Moderation: Nine common errors and their solutions, MIS Quarterly 27 (3): 479–501.
  20. Chin, W.W. (1998). The Partial Least Squares Approach for Structural Equation Modelling, in G.A. Marcoulides (ed.) Modern Methods for Business Research, Mahwah, NJ: Lawrence Erlbaum Associates, pp. 295–336.
  21. Churchill, G.A. (1979). A Paradigm for Developing Better Measures of Marketing Constructs, Journal of Marketing Research 26 (2): 64–73.
  22. Clemons, E.K. and Row, M.C. (1991). Sustaining IT Advantage: The role of structural differences, MIS Quarterly 15 (3): 275–293.
  23. Danaher, P.J., Conroy, D.M. and McColl-Kennedy, J.R. (2008). Who Wants a Relationship Anyway?: Conditions when consumers expect a relationship with their service provider, Journal of Service Research 11 (1): 43–52.
  24. Davenport, T.H., Harris, J.G., Long, D.W.D. and Jacobson, A.L. (2001). Data to Knowledge to Results: Building an analytic capability, California Management Review 43 (2): 117–137.
  25. Day, G.S. (2003). Creating a Superior Customer-relating Capability, MIT Sloan Management Review 44 (3): 77–82.
  26. Day, G.S. and Van den Bulte, C. (2002). Superiority in Customer Relationship Management: Consequences for competitive advantage and performance, Cambridge, MA: Marketing Science Institute.
  27. Dess, G.G. and Robinson, R.B. (1984). Measuring Organizational Performance: The case of the privately-held firm and conglomerate business unit, Strategic Management Journal (5): 265–273.
  28. Devaraj, S. and Kohli, R. (2003). Performance Impacts of Information Technology: Is actual usage the missing link? Management Science 49 (3): 273–290.
  29. Dewan, S. and Min, C. (1997). The Substitution of Information Technology for Other Factors of Production: A firm level analysis, Management Science 43 (12): 1660–1675.
  30. Dierickx, I. and Cool, K. (1989). Asset Stock Accumulation and Sustainability of Competitive Advantage, Management Science (35): 1504–1511.
  31. Dowling, G.R. (2002). Customer Relationship Management: In B2C markets, often less is more, California Management Review 44 (3): 87–103.
  32. Faul, F., Erdfelder, E., Lang, A.-G. and Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences, Behavior Research Methods 39: 175–191.
  33. Francalanci, C. and Morabito, V. (2008). IS Integration and Business Performance: The mediation effect of organizational absorptive capacity in SMEs, Journal of Information Technology 23 (4): 297–314.
  34. Fornell, C. and Larcker, D.F. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error, Journal of Marketing Research 18 (3): 39–50.
  35. Fryxell, G.E. and Wang, J. (1994). The Fortune Corporate ‘Reputation Index’: Reputation for what? Journal of Management 20 (1): 1–14.
  36. Grant, R.M. (1996). Toward a Knowledge-based Theory of the Firm, Strategic Management Journal 38 (5): 109–122.
  37. Greenberg, P. (2001). CRM at the Speed of Light, Berkeley, CA: Osborne/McGraw-Hill.
  38. Helfat, C.E., Finkelstein, S., Mitchell, W., Peteraf, M.A., Singh, H., Teece, D.J. and Winter, S.G. (2007). Dynamic Capabilities, Oxford, UK: Blackwell Publishing.
  39. Homburg, C., Droll, M. and Totzek, D. (2008). Customer Prioritization: Does it pay off, and how should it be implemented? Journal of Marketing 72 (9): 110–130.
  40. Iriana, R. and Buttle, F. (2006). Strategic, Operational, and Analytical Customer Relationship Management: Attributes and measures, Journal of Relationship Marketing 5 (4): 23–34.
  41. Jarvis, C.B., MacKenzie, S.B. and Podsakoff, P.M. (2003). A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research, Journal of Consumer Research 30 (2): 199–218.
  42. Kenny, D.A., Kashy, D.A. and Bolger, N. (1998). Data Analysis in Social Psychology, in D. Gilbert, S.T. Fiske and G. Lindzey (eds.) Handbook of Social Psychology, 4th edn, Vol 1, New York: McGraw-Hill, pp. 233–265.
  43. Klein, J.I. (1990). Feasibility Theory: A resource-munificence model of work motivation and behavior, The Academy of Management Review 15 (4): 646–665.
  44. Kohli, R. and Grover, V. (2008). Business Value of IT: An essay on expanding research directions to keep up with the times, Journal of the Association for Information Systems 9 (2): 23–39.
  45. Kumar, N., Stern, L.W. and Anderson, J.C. (1993). Conducting Interorganizational Research Using Key Informants, Academy of Management Journal 36 (6): 1633–1651.
  46. Lado, A.A. and Wilson, M.C. (1994). Human Resource Systems and Sustained Competitive Advantage: A competency-based perspective, The Academy of Management Review 19 (4): 699–718.
  47. Leonard, D. (1998). Wellsprings of Knowledge: Building and sustaining the sources of innovation, Boston, MA: Harvard Business School Press.
  48. Maoz, M., Collins, K., Davies, J., Kolsky, E., Mertz, S.A., Kaila, I., Dunne, M., Thompson, E., Radcliffe, J., Alvarez, G. and Desisto, R.P. (2007). The Gartner CRM Vendor Guide, http://www.gartner.com/ (accessed 17 April 2007).
  49. Marchand, D.A., Kettinger, W.J. and Rollins, J.D. (2000). Information Orientation: People, technology and the bottom line, Sloan Management Review 41 (4): 69–84.
  50. Marcoulides, G.A. and Saunders, C. (2006). PLS: A silver bullet? Editor's comments, MIS Quarterly 30 (2): iii–ix.
  51. Marketing UK (2003). The Problem of CRM Under-delivery, Marketing UK, [www document] http://www.marketinguk.co.uk/ (accessed online 15 January 2004).
  52. Markus, M.L. and Robey, D. (1988). Information Technology and Organizational Change, Management Science 34 (5): 583–599.
  53. Mata, F.J., Fuerst, W.L. and Barney, J.B. (1995). Information Technology and Sustainable Competitive Advantage: A resource based analysis, MIS Quarterly 19 (4): 487–505.
  54. MacKinnon, D.P., Krull, J.L. and Lockwood, C.M. (2000). Equivalence of the Mediation, Confounding, and Suppression Effect, Prevention Science 1: 173–181.
  55. Melville, N., Kraemer, K. and Gurbaxani, V. (2004). Information Technology and Organizational Performance: An integrative model of IT business value, MIS Quarterly 28 (2): 283–322.
  56. Mendelson, H. and Pillai, R.R. (1998). Clockspeed and Information Response: Evidence from the information technology industry, Information Systems Research 9 (4): 415–434.
  57. Mishra, A.N., Konana, P. and Barua, A. (2007). Antecedents and Consequences of Internet Use in Procurement: An empirical investigation of U.S. manufacturing firms, Information Systems Research 18 (1): 103–123.
  58. Mithas, S., Ramasubbu, N. and Sambamurthy, V. (forthcoming). How Information Management Capability Influences Firm Performance, MIS Quarterly, (in press).
  59. Mittal, V., Anderson, E.W., Sayrak, A. and Tadikamalla, P. (2005). Dual Emphasis and the Long-term Financial Impact of Customer Satisfaction, Marketing Science 24 (4): 544–559.
  60. Newbert, S.L. (2007). Empirical Research on the Resource-based View of the Firm: An assessment and suggestions for future research, Strategic Management Journal 28 (2): 127–143.
  61. Oh, W. and Pinsonneault, A. (2007). On the Assessment of the Strategic Value of Information Technologies: Conceptual and analytical approaches, MIS Quarterly 31 (2): 239–264.
  62. Payne, A. and Frow, P. (2005). A Strategic Framework for Customer Relationship Management, Journal of Marketing 69 (4): 167–191.
  63. Piccoli, G. and Ives, B. (2005). IT-dependent Strategic Initiatives and Sustained Competitive Advantage: A review and synthesis of the literature, MIS Quarterly 29 (4): 747–777.
  64. Podsakoff, P.M., MacKenzie, S.B. and Lee, J.-Y. (2003). Common Method Biases in Behavioral Research: A critical review of the literature and recommended remedies, Journal of Applied Psychology 88 (5): 879–896.
  65. Podsakoff, P. and Organ, D. (1986). Self Reports in Organizational Research: Problems and prospects, Journal of Management 12 (4): 531–544.
  66. Powell, T. and Dent-Micallef, A. (1997). Information Technology as Competitive Advantage: The role of human, business, and technology resources, Strategic Management Journal 18 (5): 375–405.
  67. Preacher, K.J. and Hayes, A.F. (2004). SPSS and SAS Procedures for Estimating Indirect Effects in Simple Mediation Models, Behavior Research Methods, Instruments & Computers 36 (4): 717–731.
  68. Ray, G., Muhanna, W.A. and Barney, J.B. (2005). Information Technology and the Performance of the Customer Service Process: A resource-based analysis, MIS Quarterly 29 (4): 625–653.
  69. Reinartz, W., Krafft, M. and Hoyer, W.D. (2004). The Customer Relationship Management Process: Its measurement and impact on performance, Journal of Marketing Research 41 (3): 293–313.
  70. Ringle, C., Wende, S. and Will, A. (2005). SmartPLS 2.0 (beta), [www document] http://www.smartpls.de.
  71. Ross, J.W. and Beath, C.M. (2002). Beyond the Business Case: New approaches to IT investment, MIT Sloan Management Review 43 (2): 51–59.
  72. Rust, R., Moorman, C. and Dickson, P.R. (2002). Getting Return on Quality: Revenue expansion, cost reduction, or both? Journal of Marketing 66 (10): 7–24.
  73. Ryals, L. (2005). Making Customer Relationship Management Work: The measurement and profitable management of customer relationships, Journal of Marketing 69 (4): 252–272.
  74. Santhanam, R. and Hartono, E. (2003). Issues in Linking IT Capability to Firm Performance, MIS Quarterly 27 (1): 125–153.
  75. Seddon, P.B. (1997). A Respecification and Extension of the DeLone and McLean Model of IS Success, Information Systems Research 8 (3): 240–253.
  76. Sharma, R., Yetton, P. and Crawford, J. (2009). Estimating the Effect of Common Method Variance: The method-method pair technique with an illustration from TAM research, MIS Quarterly 33 (3): 473–499.
  77. Sutton, D. and Klein, T. (2003). Enterprise Marketing Management, New Jersey: John Wiley & Sons, Inc.
  78. Swanson, E.B. and Ramiller, N.C. (1997). The Organizing Vision in Information Systems Innovation, Organization Science 8 (5): 458–474.
  79. Tippins, M.J. and Sohi, R.S. (2003). IT Competency and Firm Performance: Is organizational learning a missing link? Strategic Management Journal 24: 745–761.
  80. Van Bruggen, G.H., Lilien, G.L. and Kacker, M. (2002). Informants in Organizational Marketing Research: Why use multiple informants and how to aggregate responses, Journal of Marketing Research 39 (4): 469–478.
  81. Wade, M. and Hulland, J. (2004). The RBV and IS Research: Review, extension and suggestions for future research, MIS Quarterly 28 (1): 107–142.
  82. Weill, P. (1992). The Relationship between Investment in Information Technology and Firm Performance: A study of the valve manufacturing sector, Information Systems Research 3 (4): 301–331.
  83. Weill, P. and Aral, S. (2006). Generating Premium Returns on Your IT Investments, MIT Sloan Management Review 47 (2): 39–48.
  84. Weill, P. and Ross, J. (2005). A Matrixed Approach to Designing IT Governance, MIT Sloan Management Review 46 (2): 26.
  85. Weill, P. and Vitale, M. (2002). What IT Infrastructure Capabilities are Needed to Implement e-Business Models? MIS Quarterly Executive 1 (1): 17–35.
  86. Wetzels, M., Odekerken-Schroder, G. and van Oppen, C. (2009). Using PLS Path Modelling for Assessing Hierarchical Construct Models: Guidelines and empirical illustration, MIS Quarterly 33 (1): 177–195.
  87. Woszczynski, A.B. and Whitman, M.E. (2004). The Problem of Common Method Variance in IS Research, in A.B. Woszczynski and M.E. Whitman (eds.) The Handbook of Information Systems Research, Hershey, PA: Idea Publishing Group, pp. 66–77.
  88. Zhao, X., Lynch Jr., J.G. and Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis, Journal of Consumer Research 37: 197–206.
  89. Zuboff, S. (1988). The Panopticon and the Social Text, in S. Zuboff (ed.) In the Age of the Smart Machine, New York: Basic Books.

Copyright information

© Association for Information Technology Trust 2011

Authors and Affiliations

  • Tim Coltman (1)
  • Timothy M Devinney (2)
  • David F Midgley (3)
  1. University of Wollongong, Wollongong, Australia
  2. Faculty of Business, University of Technology – Sydney, Sydney, Australia
  3. INSEAD, Fontainebleau, France