Introduction and problem formulation

Managers and business model developer teams are increasingly confronted with the tasks of innovating and adapting business models to turbulent, novel, and changing situations (Chesbrough, 2007; El-Sawy et al., 2010), a development accelerated by various drivers, including technological developments (Amit & Zott, 2012; Remane et al., 2017) and the need to address grand social and environmental challenges (Schaltegger et al., 2016). The task of innovating a business model is complex, as numerous different and sometimes conflicting design decisions have to be made. While business model innovation requires creative and novel ideas, prior research has also emphasized the importance of providing structure and guidance to frame and focus thought (Eppler et al., 2011; Täuscher & Abdelkafi, 2017). Following this, the business model literature has been concerned with supporting artifacts, including overviews of distinct business model components (Al-Debei & Avison, 2010), the role of modeling languages (John et al., 2017), and the aiding force of classification tools (Möller et al., 2021).

As the diversity of real business models and corresponding business model choices has risen vastly (Pateli & Giaglis, 2004) and 90% of successful models are recombinations of existing elements (Gassmann et al., 2014), our review of the literature revealed that classification tools in particular are gaining popularity in both academia and practice. This is indicated, for instance, by the booming availability and rich cumulative body of published classification tools (e.g., Baden-Fuller & Morgan, 2010; Möller et al., 2021). This interest spans various fields, such as strategic management (Vares et al., 2022), organizational science (Lambert, 2015), information systems (IS) (Weber et al., 2022), and entrepreneurship (Gimpel et al., 2018). Corresponding tools help to organize knowledge about business models, their characteristics (i.e., attributes of real firms, Massa et al., 2017), and possible configurations (Gassmann et al., 2014; Lambert, 2015). Thereby, they meet the demand of managers and business model developers to handle the growing solution space of possible design choices (i.e., business model characteristics) and assist companies in adapting their businesses.

Despite the increasing interest, there is, to the best of our knowledge, only scarce guidance on what type of classification tool is suitable for a specific purpose within the context of business models. However, making informed decisions on how, what, and why a type of classification is developed and used is of great importance. For instance, from a theoretical viewpoint, different types of classification tools have different underpinning claims (e.g., usefulness vs. truth, Kundisch et al., 2021), build upon different sources (e.g., empirical vs. conceptual grounding, Bailey, 1994), capture different objects (e.g., ideal types in typologies, Doty & Glick, 1994), and present different relationships (e.g., causal relationships in typologies, Bonazzi & Liu, 2015). A lack of understanding of these differences hinders developers from building the most effective type of classification tool and poses challenges for users (e.g., managers) in selecting the best possible tool to achieve an intended goal. This observation is also supported by scholars stressing that many classifications are proposed “with little or no justification or explanation” (Lambert, 2015, p. 50). As a result, there is a demand for understanding a tool’s underlying design decisions to ensure scientific rigor (Lambert, 2015) and reveal its potential for practical application (Remane et al., 2017). To account for the aforementioned challenges, we disclose the frequency of certain tool types (e.g., the distribution of taxonomies and typologies) and the differences and similarities between the tools, raise awareness of the multiplicity of tools and their underpinning assumptions, and elaborate on future directions. In doing this, we take the first steps toward a more reflective building of tools and lay the foundation for advanced design guidelines. Therefore, we formulated the following two research questions: What is the distribution of classification tools for business models (RQ1)? What are the key characteristics in the design and use of different classification tool types for business models (RQ2)?

In attempting to answer these questions, we begin by clarifying fundamental concepts, underpinnings, and assumptions of common classification types as well as their role in business model research (“Research background”). Based on a descriptive literature review (“Research method”), we then describe the current landscape of classification tools for business models (“Extraction of classification tools for business models”). Afterward, we present the results of the analysis to disclose differences and similarities among those tools (“Analysis of classification tools for business models”). Building upon the results, we reflect on our observations to advance guidance on the design of classification tools, discuss implications for research and practice, and elaborate on aspects that require additional research (“Discussion”). Finally, we conclude the paper (“Conclusion”).

With our work, we make important contributions. First, we provide a status quo overview of taxonomies, typologies, classification schemes, and other classification tools in the business model domain. This overview helps readers orient themselves concerning what already exists and can serve as a basis for reusing and advancing those artifacts (vom Brocke et al., 2020). Second, our work organizes design options for each classification tool type as well as initial production patterns, which help designers make purposeful decisions during the building and evaluation of new tools, ultimately paving the way for prescriptive knowledge on business model classification design. Third, we raise awareness of the plurality of classification tools and the corresponding differences in their design.

Research background

Business model development

Business model research is characterized by its manifoldness. It is covered by many disciplines, such as IS, strategic management, and entrepreneurship (Schneider & Spieth, 2013; Wirtz et al., 2016; Zott et al., 2011). The business model concept can be broadly defined as an abstract description of how companies create, deliver, and capture value (Teece, 2010). A good business model gives answers to the following questions: Who is the customer? What does the customer value? How do we make money? What is the underlying economic logic that explains how to deliver value? (Magretta, 2002, p. 4). The business model concept is widely understood as a (management) tool to facilitate different activities (Pateli & Giaglis, 2004), including the design, analysis, understanding, and evaluation of the core business logic (Veit et al., 2014), an overview of the key components (Osterwalder, 2004), and the assessment of new ideas (Weill & Vitale, 2001). Massa et al. (2017) identified three main streams of meaning attached to the term business model: (1) attributes of real companies, in which business models are seen as elements determined “by empirically classifying real world manifestations of organizations as a function of their measured similarity on observed variables” (p. 76); (2) cognitive and linguistic schemas, which presume that managers do not hold systems in their minds when they make decisions, but rather images of such systems shaped by their own cognitive frames; and (3) formal conceptual representations, which aim to articulate a model through pictorial, mathematical, or symbolic visualizations (Massa et al., 2017).

Given the broadness of business model research, we can observe numerous subareas (e.g., Kamoun, 2008). Among others, these areas include business model components to decompose businesses into fundamental constructs (Osterwalder & Pigneur, 2010), business model development tools to visualize, automate, and leverage the process of designing a business model with software (Szopinski et al., 2020a, b), modeling languages to represent elements and relationships (John et al., 2017), and business model taxonomies to categorize businesses. In particular, the latter area of taxonomies and other categorization approaches has gained popularity in recent years, as indicated by an increasing number of publications (e.g., Möller et al., 2021). The relevance has already been stressed by Pateli and Giaglis (2004), who found in their review of e-business models that “a great deal of research has been devoted towards developing typologies of business models by classifying them under a set of criteria” (p. 308). Business model designers need to have a comprehensive understanding of the design choices (Casadesus-Masanell & Zhu, 2010), which are characterized by continuous changes because of, for instance, shifts towards more digitalized businesses (Alt, 2020), the increasing integration of emerging technologies (Weber et al., 2022), the embedment in (dynamic) platform ecosystems (Hein et al., 2020), and external influences (e.g., the worldwide pandemic, Schaffer et al., 2021). A structured overview of those choices supports different stages of business model development (e.g., Heikkilä et al., 2016), such as the ideation phase, where it provides impulses for relevant elements, and the design phase, where it offers an ontology-based orientation of the main components of a business model. Classifications are also relevant for evaluation, for example, to compare and benchmark new business ideas with existing ones, identify alternative choices, and organize decisions (e.g., Schoormann et al., 2018). Since classification tools help to organize both empirical and conceptual knowledge for the development and adoption of business models, they are promising means to face today’s situations characterized by complexity, inconsistency, and growing design options. Given that potential, this paper sheds light on classification tools in particular.

Classification in business model research

Since one of the earliest classification schemes by Carl Linnaeus, who published a comprehensive classification of animals and plants in 1735, ordering objects has become a fundamental form of science across disciplinary boundaries (Carper & Snizek, 1980; Eickhoff et al., 2017). According to Lakoff (1987, p. 5), “there is nothing more basic than categorization to our thought, perception, action, and speech.” Without this ability, one would perceive each entity as unique and would be overwhelmed by the diversity of things (Smith & Medin, 1981). By organizing knowledge, classification tools help to understand and analyze complex objects as well as hypothesize about object relationships (Wand et al., 1995).

A range of terms is used in business model research to describe schemas and mechanisms for classifying objects; taxonomies and typologies are among the most frequent (Lambert, 2015). While these terms tend to be used interchangeably (Kamprath & Halecker, 2012; Lambert, 2015), there are, however, distinguishing aspects (e.g., Baden-Fuller & Morgan, 2010; Bailey, 1994; Doty & Glick, 1994) that need to be taken into account during their design and/or application. Next, selected differentiating aspects are discussed.

When employing the term typology, some scholars refer to a deductively derived classification (i.e., top-down), a so-called conceptual classification (Baden-Fuller & Morgan, 2010). For example, as stressed by Bailey (1994, p. 4), a “typology is generally multidimensional and conceptual [and thus] the cells of a typology represent type concepts rather than empirical cases.” Thereby, typologies focus on so-called “ideal profiles” that are theoretical abstractions seeking to capture holistic configurations of multiple constructs. These types represent a “unique combination of the organizational attributes that are believed to determine the relevant outcome(s)” (Doty & Glick, 1994, p. 232). In doing this, typologies help to reduce the information of complex ideal types or real-world things to those aspects of theoretical significance or an observer’s interest (Reinhold et al., 2018). Adapting these assumptions to this study’s context, business model typologies should contain ideal business model elements grounded in conceptual and theoretical inputs.

In contrast, taxonomies are usually derived with an inductive approach (i.e., bottom-up) and therefore represent empirical classifications (Sokal, 1963). Taxonomies “categorize phenomena into mutually exclusive and exhaustive sets with a series of discrete decision rules” (Doty & Glick, 1994, p. 232). This type of classification can be matched to Massa et al.’s (2017) interpretation of business models as “attributes of real firms,” which are examined empirically by classifying real-world organizations. Unlike typologies, a taxonomy usually aims to classify real instances according to their measured similarity on observed characteristics (Bailey, 1994). Business model research about taxonomies is concerned with exploring possible categorizations of business models based on their unique features. Given that “a relatively significant portion of work […] has been performed in this field” (Pateli & Giaglis, 2004, p. 306), the importance of taxonomies is recognized in business model literature. As a result, various taxonomies exist, covering domains as diverse as Fintech, manufacturing, music, healthcare, and energy (see the overview of empirical taxonomies in Möller et al., 2021).

Despite the widely accepted differentiation between conceptual and empirical classification tools, Nickerson et al. (2013) combined both building approaches. The authors argued that taxonomies can be grounded in conceptual and empirical work simultaneously. Nonetheless, this paper sets out to explore how different tool types are created and whether there are (intended) distinctions to be considered.

Research method

This paper reports on a descriptive literature review to examine a corpus of studies in a certain field and reveal patterns and trends (Paré et al., 2015). Because ensuring rigor in literature reviews is important to enable other researchers to build on the review’s findings, we adapted guidelines for qualitatively analyzing literature from Bandara et al. (2015), including the four phases of extraction, screening, coding, and presentation, and enriched them with elements from Templier and Paré (2018). Thereby, we seek to perform two knowledge-building activities (Schryen et al., 2020), namely, synthesizing to organize published knowledge and identifying research gaps to reflect on a possible mismatch between required and available research and derive potential directions.

Phase 1: extraction of literature (classification tools)

To assemble a corpus of papers proposing a classification tool in the business model field, we employed a threefold search approach to identify a broad set of tools. First, we used Scopus (restricted to the subject areas “COMP” and “BUSI,” the document type “articles,” and the language “English”) to collect a wide range of literature regardless of disciplinary boundaries. In accordance with the prevailing forms of classification tools (see “Research background”), we selected the following search terms (title and abstract): “business model typology,” “business model taxonomy,” and “business model classification.” Second, being aware that taxonomies have gained popularity especially in the IS community, we used AISeL to search for “business model” in combination with “taxonomy,” “typology,” and “classification” in abstracts and titles. Third, to incorporate more technical and engineering-based literature, we also screened the aforementioned keywords in Science Direct.
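For transparency, this threefold search configuration can also be captured in a small script. The following is a minimal sketch, assuming the terms and filters reported above; the query strings are illustrative approximations and do not reproduce the exact query syntax of Scopus, AISeL, or Science Direct.

```python
# Minimal sketch of the threefold search strategy described above.
# Query strings are illustrative only; the actual syntax differs
# between Scopus, AISeL, and Science Direct.

KEYWORDS = ["typology", "taxonomy", "classification"]

def build_query(split_terms: bool) -> str:
    """Build a title/abstract query string.

    split_terms=False -> compound phrases such as "business model taxonomy"
                         (used for Scopus and Science Direct).
    split_terms=True  -> "business model" AND the classification keyword
                         (used for AISeL).
    """
    if split_terms:
        parts = [f'("business model" AND "{kw}")' for kw in KEYWORDS]
    else:
        parts = [f'"business model {kw}"' for kw in KEYWORDS]
    return " OR ".join(parts)

SEARCHES = {
    # database: (query, additional restrictions reported in the text)
    "Scopus": (build_query(split_terms=False),
               {"subject_areas": ["COMP", "BUSI"],
                "document_type": "article",
                "language": "English"}),
    "AISeL": (build_query(split_terms=True), {}),
    "Science Direct": (build_query(split_terms=False), {}),
}

for database, (query, restrictions) in SEARCHES.items():
    print(f"{database}: {query} | restrictions: {restrictions}")
```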

Phase 2: screening and selection of literature (classification tools)

We performed an initial search at the end of 2021 to get familiar with the topic and examine the feasibility of our research design. The final search in Scopus and AISeL was updated in May 2022; Science Direct was added in September 2022. As a result, we crafted a corpus of 481 hits; 291 hits were obtained from Scopus, 96 hits from AISeL, and 97 from Science Direct. Following Bandara et al. (2015), a title- and abstract-based screening was performed. The remaining papers were analyzed based on their full texts. We excluded the following papers: duplicates (e.g., papers appearing both in AISeL and the Top Basket) and papers proposing classification tools that do not directly refer to business models (e.g., taxonomies of business model development tools, Szopinski et al., 2020a, b). We included papers that present a classification tool for business models in any form, regardless of domain and technology. After applying the inclusion and exclusion criteria, we arrived at a corpus of 90 papers. Of these 90 papers, 32 were collected from Scopus, 49 from AISeL, and 9 from Science Direct (see Appendix 1 for review details).

The distribution of tool types observed in Fig. 1 is consistent with the increasing interest in taxonomies in general. For instance, Oberländer et al. (2019) identified in their systematic assessment that, while the number of classification papers has grown in recent years, taxonomies in particular have become highly relevant for the IS community. Among other reasons, this can be attributed to the widely accepted method for taxonomy building by Nickerson et al. (2013), which is nowadays recognized as the “de-facto standard” (Kundisch et al., 2021) and combines conceptual and empirical classifications under the umbrella term of taxonomies.

Fig. 1 Chronological distribution of business model classification tools

Phase 3: coding and analysis of literature (analytical grid)

In descriptive reviews, characteristics and relations of interest are extracted from a sample of papers (Paré et al., 2015). To do this, the collected literature corpus needs to be managed and analyzed; we therefore created a coding schema (Bandara et al., 2015) and applied it to compare classification tools.

Building the analytical grid

To create a coding schema capable of supporting the analysis of the general components addressed within a classification tool, we adapted a taxonomy-driven research method informed by Kundisch et al. (2021). In doing this, we were able to extract key characteristics for each of the different tool types and create a basis for exploring differences and similarities. Next, we summarize the main steps of designing our taxonomy; see Appendix 2 for detailed descriptions.

In Iteration 1, we started with a conceptual approach in which we drew on prior literature providing important elements to be considered for classification tools. Based on the literature, we specified the following initial taxonomy dimensions: empirically and conceptually informed research approach (e.g., Bailey, 1994), grounding and development (e.g., Nickerson et al., 2013), demonstration and evaluation (e.g., Szopinski et al., 2019), communication and visualization (e.g., Szopinski et al., 2020a, b), application and use (e.g., Schoormann et al., 2022), and general aspects covering the purpose and scope of a tool.

In subsequent steps, we made use of the identified sample of classification tools (see Phase 1) and inductively refined the conceptual foundation. Four empirical iterations were performed. In Iteration 2, we randomly selected five papers for each of the main types of tools, namely, taxonomies, typologies, and classification schemes. The 15 papers were analyzed with the conceptual lens derived in the prior iteration as well as in an explorative manner to identify new and refine existing dimensions and characteristics. Following this, we extracted additional characteristics, such as new grounding inputs (e.g., public data from newspapers), development approaches (e.g., reusing and extending available artifacts), and evaluation techniques (e.g., statistical validation and theoretical saturation). In Iteration 3, we replicated the strategy from the previous iteration and again selected five papers for each of the different types. Thereby, we made several extensions, including additional grounding inputs (e.g., survey data), evaluation techniques (e.g., logical arguments), and use purposes (e.g., deriving trends and future research). Also, we refined the visualization dimension by splitting matrix representations into two-dimensional and three-dimensional matrices. In Iteration 4, as the taxonomy became more robust, we selected a larger sample of papers. We classified an additional set of 30 randomly selected papers, regardless of which type of classification tool was presented. During the analysis, we could only extract two extensions, namely, textual explanations as a visualization form and the use of classification tools as part of building larger artifacts (e.g., holistic frameworks). Lastly, in Iteration 5, the remaining papers of our sample were classified. While the first rounds were performed by one author, this coding was done by other members of the author team and validated afterward to ensure that different users can apply the taxonomy. During this iteration, no new dimensions and characteristics were added, pointing to the taxonomy’s saturation. However, through continuous discussions, we refined some terms and restructured the order of dimensions. In line with these refinements, we grouped the dimensions into meta-dimensions to provide additional structure.

Applying the analytical grid (comparison of classification tools)

To ensure the generalizability of results, descriptive literature reviews also “codify and analyze numeric data that reflect the frequency of the topics, authors, or methods in the extant literature” (Paré et al., 2015, p. 186). Common methods in these reviews are content analysis and frequency analysis to produce quantitative outcomes. Following this, we applied the analytical grid (i.e., taxonomy) from the previous phase to extract and compare the distribution of characteristics across all classification tool types. In the first step, the frequencies of fulfilled characteristics were counted to disclose how often a certain element is fulfilled by a type. Afterward, the frequencies of each tool type were compared to each other in order to disclose differences between the types and point out their constituting features.
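To illustrate the counting step, the following minimal sketch tallies fulfilled characteristics per tool type and converts them into relative frequencies; the paper codings shown are hypothetical placeholders, not our actual data.

```python
from collections import Counter, defaultdict

# Hypothetical codings: one record per paper with its tool type and the
# grid characteristics it fulfills (labels are illustrative).
CODED_PAPERS = [
    {"type": "taxonomy", "characteristics": {"approach: combined", "grounding: literature", "visualization: morphological box"}},
    {"type": "taxonomy", "characteristics": {"approach: empirical", "grounding: real-world objects", "visualization: morphological box"}},
    {"type": "typology", "characteristics": {"approach: conceptual", "grounding: theory", "visualization: 2D matrix"}},
    {"type": "classification scheme", "characteristics": {"approach: conceptual", "grounding: literature", "visualization: hierarchy"}},
]

def count_characteristics(papers):
    """Count how often each characteristic is fulfilled per tool type."""
    counts = defaultdict(Counter)
    totals = Counter()
    for paper in papers:
        totals[paper["type"]] += 1
        counts[paper["type"]].update(paper["characteristics"])
    return counts, totals

def relative_frequencies(counts, totals):
    """Express counts as the share of papers within each tool type."""
    return {tool: {char: n / totals[tool] for char, n in chars.items()}
            for tool, chars in counts.items()}

counts, totals = count_characteristics(CODED_PAPERS)
for tool, shares in relative_frequencies(counts, totals).items():
    print(tool, {char: round(share, 2) for char, share in shares.items()})
```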

Phase 4: Presentation of results

As a result, we present the status quo of classification tools extracted from the literature corpus to visually represent the current landscape of research (see “Extraction of classification tools for business models”) as well as the coding scheme in the form of an analytical grid to derive and compare the key characteristics of those tools (see “Analysis of classification tools for business models”). In doing this, we were able to reflect on main observations to synthesize a set of lessons learned for the design of business model classification tools (e.g., production patterns) as well as on (theoretical) gaps within this field to provide possible directions for future research (see “Discussion”).

Extraction of classification tools for business models

By drawing on the sample of published papers from the identification phase, we aim to uncover the diversity of how classifications are developed, communicated, and used. The following subsections present an overview of the tools along the four major types: taxonomies, typologies, classification schemes, and other classification tools (see Appendix 3 for the entire list of papers; see Appendix 4 for representations that stood out). For transparency, we reference example papers (#ID) in our explanations.

Business model taxonomies

With 46/90 papers (~ 51%), the bulk of our sample proposed classification tools in the form of taxonomies (see Fig. 2). Typically, taxonomies focus on entire business models (42/46) instead of specific business model components (4/46), such as key resources or key partners. They address numerous different domains, such as Fintech (#17), data marketplaces (#21), textile industry (#26), and digital music services (#34), as well as technologies, including the Internet of Things (#25) and Blockchain business models (#7).

Fig. 2 Overview of taxonomies (note: partly adapted for readability)

Most of the business model taxonomies are derived by combining empirical and conceptual approaches (30/46); only 10/46 are purely empirical and 6/46 purely conceptual. Researchers tend to make use of scientific literature (39/46) and sets of real-world objects (22/46), such as startup businesses (#45), available business models from CrunchBase (#38), or available IoT platforms (#25). In line with this, common development procedures are literature reviews and analyses (30/46) as well as qualitative content analysis and coding to extract typical business model features from data about real-world objects (27/46). More than half of the papers seek to indicate the applicability of their taxonomy by providing illustrative cases and demonstrations (29/46). To additionally indicate the taxonomy’s usefulness, some researchers also performed expert interviews and workshops (6/46). For presentation, 28/46 papers use morphological boxes organized along meta-dimensions or so-called layers (30/46), dimensions (40/46), and characteristics (37/46). Our sample contains taxonomies with more than 20 dimensions (#70, #71) as well as more than 90 individual characteristics (#70). In addition to the actual taxonomy, researchers often apply their results to arrive at business model archetypes (i.e., typical configurations of the characteristics) (19/46) as well as to derive future trends and potential for subsequent research (15/46).

Business model typologies

With 28/90 papers (~ 31%), typologies are the second most frequently used classification tool in our sample (see Fig. 3, top). Most typologies focus on entire business models (22/28) instead of specific business model components (6/28). Typologies address several domains, such as sustainability (#47), the tourism industry (#55), and air navigation (#12). In terms of technologies, we found tools dealing, for instance, with distributed ledger technology (#58), wireless network business models (#2), mobile applications (#16), and Blockchain (#13).

Fig. 3 Overview of typologies, classification schemes, and other tools (note: partly adapted for readability)

While prior literature emphasizes the conceptual grounding of typologies (Bailey, 1994; Doty & Glick, 1994), our sample shows a rather balanced grounding. With 13/28, the largest share is conceptually deduced, 11/28 are derived from empirical data, and 4/28 use combinations of both approaches. In line with the conceptual foundation, we found development methods including theorizing (#39) and typological reasoning (#55). Typically, scientific literature and theoretical knowledge are applied for development. For presenting the resulting typology, researchers mostly use two-dimensional (8/28) and three-dimensional (6/28) matrices, tables (11/28), and graphical figures (6/28). Those representations typically comprise few dimensions and few characteristics. All of the typologies report on archetypes or clusters of corresponding business models (28/28).

Business model classification schemes

12/90 papers (~ 13%) present general classification schemes for business models (see Fig. 3, center). While most of them shed light on entire business models (7/12), in contrast to the other types, several classification schemes also focus on specific business model components (5/12), such as partners (#37) or outputs (#60). Those schemes are applied in several domains, such as banking (#54), the sharing economy (#27), and software firms (#38). Thereby, different technologies are in focus, for example, the Internet of Things (#18) and RFID (#28).

Classification schemes are mostly developed conceptually (6/12) by drawing on data from scientific literature (7/12) and theoretical knowledge (2/12). Our sample contains a diverse set of development techniques, such as design science research (#54), conceptual modeling (#60), and a specific classification methodology as proposed by Fettke & Loos (#22). For visualizing the dimensions (11/12) and characteristics (7/12) of the classification, researchers use hierarchical structures (5/12) and graphical representations in the form of figures (3/12). The majority of papers do not report on types or clusters of business models (8/12).

Other business model classification tools

Classification tools that cannot be positioned as one of the aforementioned types are summarized as “other.” Our sample contains 4/90 papers (~ 4%) within this category (see Fig. 3, bottom). Other types comprise papers that propose either a general framework (3/4) or “cluster profiles” (1/4). These are built for domains such as software firms (#33) and electronic markets (#15), as well as for technologies, including cloud-based businesses (#69). Given that this group is very small, it is hard to identify unique features. Nonetheless, we observe that these tools typically mixed conceptual and empirical approaches (2/4), relied on scientific literature (3/4), followed clustering techniques (2/4), and are represented in the form of tables (2/4) or hierarchical structures (2/4).

Analysis of classification tools for business models

Analytical grid for the tool analysis

Based on the status quo of business model classification tools, we analyzed differences through a taxonomic approach. After six iterations (i.e., one conceptual and five empirical iterations), we arrived at our comprehensive taxonomy for classification tools (see Fig. 4). For the sake of structure, we clustered the taxonomy dimensions along four meta-dimensions. Each of these meta-dimensions captures three to five dimensions; for example, “design and development” (D) differentiates between the “research approach” (D1), the grounding data (D2), and the building approach (D3). Please note that the dimension IDs indicate their belonging to a meta-dimension (i.e., G = general; D = design; E = evaluation; C = communication).

Fig. 4 Analytical grid (represented as taxonomy). Note: * represents mutually exclusive dimensions (i.e., one characteristic per object)

In the following subsections, we first explain the grid, which serves as a basis for comparing the classification tools based on their characteristics, and then illustrate differences. To support the traceability of the inductive procedure, paper IDs (#) are presented for each characteristic (see Appendix 3 for the entire list of tools and IDs).

Meta-dimension: General characteristics

The first meta-dimension contains five dimensions, including the type of tool, primary goal, business model scope, domain focus, and technology focus. As the most basic aspect, the tool type (G1) differentiates between taxonomy, typology, classification scheme, and other types to also incorporate rarely used tools (e.g., some authors refer to frameworks (#69) or cluster profiles (#31)). In terms of why a tool is created and/or can be employed, primary goals (G2) were extracted through the empirical analysis. Being aware of the fact that the purpose is somehow interwoven with the entire paper, we sought to specify a set of common goals. Besides classification (#35), our sample emphasized goals for the identification of key characteristics (#41) and differentiating characteristics of business models (#39), the specification of clusters/types (#66), and the assessment and analysis (#51) of business models. Also, researchers aim to contribute to the understanding and unified definition of business models (#44) as well as to the exploration of novel configurations (#57) to strive for innovation. During the analysis, we found papers with varying scopes. While, for instance, some authors explore abstract classes of business models (e.g., mobile business models, #37), others only address single business model elements like key business resources (#62), which is specified as the business model scope (G3) within our taxonomy. As another aspect, the classification tools differ in what they analyze, including specific technologies (e.g., a taxonomy for artificial intelligence-based business models, #97) or certain industries (e.g., a typology for tourism, #39). Consequently, we distinguish whether a tool has a domain focus (G4), such as Fintech, and/or a technology focus (G5), such as Blockchain businesses, or not.

Meta-dimension: Design and development

As one of the main activities, the second meta-dimension refers to the actual design of a classification tool. In accordance with prior literature, the research approach (D1) indicates whether the tool is empirically (i.e., usually referred to as taxonomy) and/or conceptually (i.e., usually referred to as typology, Bailey, 1994) derived. Depending on the approach, one can rely on different grounding (data) (D2), such as qualitative data from real-world objects (e.g., 125 logistics startups, #43), expert interviews (e.g., semistructured interviews with managers, #73), and document analysis (e.g., public archives, news articles, and press releases, #13), as well as on quantitative data, such as from larger surveys (#33). In addition, our analysis disclosed that authors employ knowledge from scientific literature, mostly in combination with systematic reviews, as well as draw on theoretical and conceptual work, such as theories on e-business models and value networks (#28). Building upon the input data, the dimension building approach (D3) represents techniques and activities for building the tool. Typically, more than one approach is followed. Among the most frequently mentioned ones are clustering (#33), case studies (#51), systematic literature review and analysis (#36), and content analysis and coding (#72). Moreover, we found papers reusing and refining available tools, for instance, to “empirically revisit existing product-service system business model typologies” (#1). With regard to typologies, our sample discloses a great heterogeneity of building approaches employed only once or a few times, captured by “other approaches” (e.g., the configurational comparative method, #44, or typological reasoning, #55).

Meta-dimension: Evaluation and application

After a classification tool has been designed, it requires evaluation, depending on the underlying philosophical stance (e.g., Kundisch et al., 2021), to highlight, for instance, its applicability, usefulness, or truth value. Within this meta-dimension, we distinguish between three dimensions covering the evaluation of the design process (i.e., how the tool is designed), the evaluation of the product (i.e., the outcome), and the use of the tool in subsequent steps. Referring to the design process, our sample entails papers performing statistical validation, such as the robustness of clusters (#12), and method-oriented validations, including theoretical saturation during coding processes (#16) or the reliability of coders. These are specified under the dimension process (building) (E1). Concerning the actual product, product (tool) (E2) evaluation comprises several techniques, including illustrative cases (#37), expert panels and interviews (#34), and logical arguments (#9), to elaborate why a tool works or is of value. In addition to evaluation activities, numerous scholars apply a tool themselves or observe how external users apply it. For example, classification tools are employed to derive business model archetypes (#46) or to classify concrete real-world business models within a set of generic types (#37). Besides, scholars draw on classification tools to reflect on the advantages of business model types (#2), discuss gaps and trends within a certain domain of interest (#36), derive testable propositions (#9), and develop further artifacts (e.g., create a holistic framework, #37). These usages are summarized as use scenarios (E3).

Meta-dimension: Communication

Finally, prior research has already tackled questions about how to visualize a taxonomy (Szopinski et al., 2020a, b). When it comes to presenting a classification tool and its results, it is important to consider its visualization and form of communication. In this meta-dimension, we identified two main aspects. First, visualization (C1) shows how the classification tool is presented. Our analysis revealed numerous ways of presentation, including two-dimensional matrices, three-dimensional matrices (e.g., cubic typologies, #5), morphological boxes, and rather regular tables. Also, papers use hierarchical structures and graphical visualizations in the form of figures. Only very few papers in our sample refrained from such presentations and provided an exclusively textual description (#24). To indicate which components are covered by a representation, elements reported (C2) captures whether a tool contains meta-dimensions (also called layers or perspectives), dimensions, and single characteristics. Since these are more common in tools focusing on classification criteria than in entire classification types, we also added an element for “n/a.” Lastly, even though we assume that typologies tend to focus on (arche-)types, we found many papers presenting such types regardless of the tool’s actual category. In consequence, we added the dimension generic types reported (C3) to capture this information.
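For reuse (e.g., for coding further papers), the grid’s structure can also be expressed as a simple nested mapping. The sketch below is a shorthand under our own assumptions: it lists the meta-dimensions and dimensions named in this section, while the characteristic lists are abbreviated examples rather than the complete set shown in Fig. 4.

```python
# Sketch of the analytical grid as a nested data structure.
# Characteristic lists are abbreviated examples, not the full set of Fig. 4.
ANALYTICAL_GRID = {
    "G: General": {
        "G1 Tool type": ["taxonomy", "typology", "classification scheme", "other"],
        "G2 Primary goal": ["classification", "key characteristics", "differentiating characteristics",
                            "clusters/types", "assessment/analysis", "understanding/definition",
                            "novel configurations"],
        "G3 Business model scope": ["entire business model", "single components"],
        "G4 Domain focus": ["domain-specific", "agnostic"],
        "G5 Technology focus": ["technology-specific", "agnostic"],
    },
    "D: Design and development": {
        "D1 Research approach": ["empirical", "conceptual", "combined"],
        "D2 Grounding (data)": ["real-world objects", "expert interviews", "documents",
                                "surveys", "scientific literature", "theory"],
        "D3 Building approach": ["clustering", "case study", "literature review",
                                 "content analysis/coding", "reuse of existing tools", "other"],
    },
    "E: Evaluation and application": {
        "E1 Process (building)": ["statistical validation", "method-oriented validation"],
        "E2 Product (tool)": ["illustrative cases", "expert panels/interviews", "logical arguments"],
        "E3 Use scenarios": ["derive archetypes", "classify real-world models",
                             "discuss gaps/trends", "derive propositions", "build further artifacts"],
    },
    "C: Communication": {
        "C1 Visualization": ["2D matrix", "3D matrix", "morphological box", "table",
                             "hierarchy", "figure", "textual description"],
        "C2 Elements reported": ["meta-dimensions", "dimensions", "characteristics", "n/a"],
        "C3 Generic types reported": ["yes", "no"],
    },
}

# Example: list all dimensions belonging to the design meta-dimension.
print(list(ANALYTICAL_GRID["D: Design and development"]))
```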

Application of the analytical grid

Next, we apply the analytical grid to our sample of classification tools to (a) demonstrate the grid’s applicability and (b) present the distribution of characteristics across the main tool types to disclose similarities and differences (see Fig. 5). To indicate the distribution, we count the number of fulfilled characteristics for the overall sample and individually for each tool type (i.e., taxonomy, typology, and classification scheme). We refrain from presenting the “other tools” (4/90) separately because this subsample comprises only four papers.

Fig. 5 Using the analytical grid of business model classification tools

Discussion

In the face of great interest in and a growing body of classification tools within the business model domain, we have identified a corpus of classification tools for business models (see RQ1) and have started to examine differences and similarities using an analytical grid in the form of a taxonomy (see RQ2). We shed light on the plurality of tools and how they are developed, evaluated, and communicated. Next, we reflect on the insights obtained from the extraction and analysis of tools to elaborate on the current state, provide guidance and recommendations for building new tools, and derive avenues for future research.

Reflections and lessons learned

Synthesis of the current business model classification tools

Following prior research, the industry/domain and technology play an essential role in business model classifications (e.g., Möller et al., 2021; Weber et al., 2022). Whether a tool is specific to or agnostic of a domain or technology particularly affects its (re-)use. While, for example, more generic tools allow for contextualization to a domain (e.g., Remane et al., 2017), a set of technology-specific tools allows for generalization to a more abstract class of technology. A synthesis of these tools fosters knowledge accumulation and cross-innovation as well as opens a larger solution space for the recombination of existing characteristics.

By analyzing the distribution of classifications across technologies and domains, some focal points and trends could be disclosed (see Table 1). Following recent shifts in the business model area towards electronic markets and more digital design choices (e.g., Alt, 2020), our sample also indicates a trend towards more data-driven and technology-enabled business models. While in the early 2000s some papers started to focus on the use of the Internet (#60) and other electronic applications (e.g., e-commerce, #3) as well as on the software industry (#24, #34), scholars have proposed classification tools for digital business models since 2017. Examples include digital business models for logistics (#43), tourism (#39), retail (#10), and Fintech (#17). Moreover, we see an increasing interest especially in data-driven businesses since 2020, with papers on, for instance, data-driven logistics services (#42) and data marketplaces (#21), all of them represented as taxonomies.

Table 1 Synthesis of domains and technologies

In accordance with the increased digitalization, another trend discussed in recent research deals with the integration of (novel) technologies at the core of a business model (e.g., Weber et al., 2022). 31 out of 90 classification tools in our sample focus on a certain technology, 21 of which have been published since 2017. Of this recently published subsample of 21 tools, 15 are represented as taxonomies (e.g., artificial intelligence, #91; Blockchain, #71) and only four as typologies (e.g., distributed ledger, #58).

In contrast to the booming relevance of platform-based business models and ecosystems (e.g., Hein et al., 2020) in the general business model literature, we only found three classification tools within this context, all published between 2015 and 2018. Two are presented as typologies (open data platforms, #9; multisided platforms, #64) and one as a taxonomy (platform-based marketplaces, #66).

Regarding the actual domain of interest, we can observe a rather balanced distribution of the classification tools. Although there are slight accumulations in the mobility, energy, tourism, and banking domains, a rather wide range of areas is addressed by the tools analyzed in our sample.

Observations from comparing business model classification tools

In line with our research question to disclose differences between the tools, we compare the distribution of the tools’ characteristics through the analytical grid (see Fig. 5). To ensure a more balanced comparison, we take into account the percentage frequency within each group of tool types (i.e., 46 taxonomy papers equate to 100%, 28 typology papers equate to 100%, and 12 classification scheme papers equate to 100%). By comparing the frequency of characteristics, we especially reflect on elements with more than 20 percentage points of variation to reveal the main differences between tool types; a minimal computational sketch of this comparison is provided after the list below. Thereby, the following major observations have emerged:

  • Primary goal. Referring to the actual goal to be achieved, our sample is quite balanced across the classification tool types. We can, however, observe that taxonomies are more frequently concerned with the identification of key characteristics (28%) and the contribution to a more unified understanding (48%), whereas typologies seek to create a small set of (ideal) types (39%) and thus support researchers in elaborating on differentiating properties (29%). This is in line with general taxonomy research, which is also mostly concerned with the identification of dimensions and characteristics (Schoormann et al., 2022), and with reviews that organize specific concepts from existing research (Schwarz et al., 2007).

  • Business model scope. While taxonomies (91%) and typologies (79%) mostly cover entire business models, about 42% of the classification schemes in our sample focus on specific business model components, such as key resources or channels. For illustration, there is a difference of 49% between taxonomies and classification schemes in terms of single components addressed.

  • Domain focus. In contrast to typologies (43%) and taxonomies (50%), most classification schemes (67%) focus on a specific domain of business models and thus are less agnostic. Reflecting this back to literature, we observe a similarity with business model taxonomy research highlighting a balanced presentation of specific/agnostic results (Möller et al., 2021) and differences to other types of tools, such as business model patterns that are mostly intended to be generic (Remane et al., 2017).

  • Research approach. With regard to the underlying grounding approach, we can observe a considerable difference for taxonomic tools. Taxonomies (65%) are most frequently shaped by combined empirical and conceptual developments, which might be attributed to the fact that Nickerson et al.’s (2013) procedure model is the quasi-standard in the IS discipline and recommends combining both approaches. Contrarily, only 14% of typologies and 17% of classification schemes are derived with mixed approaches. Although most typologies are based on conceptual approaches (46%), we surprisingly found many typologies (39%) that are derived empirically, in contrast to underlying assumptions such as those of Bailey (1994).

  • Grounding. Whereas most of the grounding characteristics are fulfilled rather equally across the tool types, taxonomies (84%) are more frequently grounded through scientific literature in contrast to typologies (68%) and classification schemes (58%). Also, there is a small tendency for taxonomies to draw on real-world objects (48%).

  • Building approach. In line with their empirical nature, researchers more often apply content analysis and coding procedures to develop a taxonomy (59%) than a typology (25%) or a classification scheme (17%). Moreover, we can observe a tendency for taxonomies to apply a set of common procedures, because only 15% are captured through the characteristic “other procedure.” This can be attributed to the availability of systematic methods, such as the one by Nickerson et al. (2013). In contrast, there is a greater diversity of methods for typologies (36%) and classification schemes (42%).

  • Evaluation. In terms of evaluating the resulting tool, taxonomies (63%) mostly tend to follow a widely accepted design-oriented schema and thus draw on demonstrations in the form of illustrative scenarios, an approach that is less reflected in typologies (18%) and classification schemes (42%).

  • Application. Following a frequently used paper structure, most of the taxonomies (41%) derive archetypes after the building has been completed; this differs from typologies because most of them build a set of types as their main result. Also, we found that taxonomies (33%) are more often used to derive trends and a research agenda. Developing archetypes based on taxonomic artifacts is also observed in general taxonomy research (e.g., Oberländer et al., 2019).

  • Visualization. Referring to the presentation, different tendencies can be observed. While typologies are more frequently visualized in the form of two-dimensional matrices (29%) or even three-dimensional cubes (21%), taxonomies are typically represented as morphological boxes (61%) structured along dimensions and characteristics, and classification schemes through hierarchical structures (42%). This observation is in accordance with related taxonomy research that points to common representations in the form of morphological boxes (e.g., Möller et al., 2021; Oberländer et al., 2019).
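As referenced before the list, the comparison logic itself is simple. The following minimal sketch flags characteristics whose spread across tool types exceeds the 20-point threshold, using a few of the relative frequencies reported above as example values (not the full dataset).

```python
# Compare relative frequencies (share of papers per tool type) and flag
# characteristics whose max-min spread exceeds 20 percentage points.
# Example values are taken from selected observations above.
FREQUENCIES = {
    "scope: entire business model": {"taxonomy": 0.91, "typology": 0.79, "classification scheme": 0.58},
    "approach: combined": {"taxonomy": 0.65, "typology": 0.14, "classification scheme": 0.17},
    "grounding: scientific literature": {"taxonomy": 0.84, "typology": 0.68, "classification scheme": 0.58},
    "building: content analysis/coding": {"taxonomy": 0.59, "typology": 0.25, "classification scheme": 0.17},
}

THRESHOLD = 0.20  # 20 percentage points

def flag_differences(frequencies, threshold=THRESHOLD):
    """Return characteristics whose spread across tool types exceeds the threshold."""
    flagged = {}
    for characteristic, shares in frequencies.items():
        spread = max(shares.values()) - min(shares.values())
        if spread > threshold:
            flagged[characteristic] = round(spread, 2)
    return flagged

print(flag_differences(FREQUENCIES))
```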

Patterns for producing classification tools

With these observations, we abductively derive initial patterns for producing classification tools (see Fig. 6). Next, four of these patterns are described by means of illustrative scenarios to provide additional practical guidance for using our insights as well as indicate their applicability (i.e., proof of concept, Nunamaker et al., 2015). In accordance with scholars who emphasized considering the actual purpose of a taxonomy (Schoormann et al., 2022), our production patterns are also inspired by the primary goal tool designers aim to achieve.

Fig. 6 Initial production patterns

Production pattern (#1)—Striving for the ideal. One cluster of aims is concerned with specifying an ideal type of a business model. In the classification literature, the term “ideal type” generally refers to a common representation of a given phenomenon and does not imply perfection (Bailey, 1994). Among the seminal scholars is Max Weber, who stressed that the conceptual purity of an ideal type refers to a mental construct (German: “Gedankenbild”) that cannot be found in empirical instances (Weber, 1949). In consequence, this type does not need to exist in reality but can be used to examine empirical cases (Nickerson et al., 2013). Referring to business models, the ideal type supports giving impulses for (re-)designing models and providing orientation about common businesses within a certain field of interest. Following these underlying assumptions, classification tools from our sample tend to employ conceptual research approaches in which designers theorize, conceptualize, and deduce from theoretical inputs and scientific literature. Instead of focusing on specific elements of a business model, entire businesses are described using a few concise dimensions. As only a few dimensions are described, this tool type is often visualized in the form of two- and three-dimensional matrices.

Production pattern (#2)—Identifying characteristics of real-world instances. In contrast to the conceptual modeling of business types, designers here seek to create an empirical understanding of the main features of existing business models. Following prior research emphasizing the demand for studying real-life examples (e.g., Baden-Fuller & Morgan, 2010; Möller et al., 2021), this pattern captures activities for analyzing and coding a set of real-world business models to extract existing characteristics. Typically, the obtained insights are presented via a morphological box to differentiate between more abstract dimensions and single characteristics.

Production pattern (#3)—Discovering (novel) configurations. In addition to organizing the characteristics of business models (see #2), this pattern focuses on exploring valid relationships among the set of characteristics. In our sample, most of the classification tools are applied in the paper to develop archetypes to represent common configurations. These endeavors mostly cover the entire models instead of single components and are focused on certain domains. Designers can build upon available classifications—mostly empirical tools such as taxonomies—to examine configurations and present them via morphological boxes. Thereby, the actual configurations are often highlighted within the morphological box, such as by using color coding, heat maps, and connecting lines.
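One way this pattern is commonly operationalized, in line with the clustering-based building approaches observed in our sample, is to cluster classified instances on their characteristic profiles. The following is a minimal sketch with hypothetical binary profiles, not a reconstruction of any specific paper’s method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary profiles: rows = classified business models,
# columns = taxonomy characteristics (1 = characteristic applies).
profiles = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
])

# Agglomerative clustering on the binary profiles; candidate archetypes are
# the characteristic patterns shared within each resulting cluster.
links = linkage(profiles, method="average", metric="hamming")
labels = fcluster(links, t=2, criterion="maxclust")

for cluster_id in np.unique(labels):
    members = profiles[labels == cluster_id]
    # Characteristics present in at least half of the cluster members
    archetype = (members.mean(axis=0) >= 0.5).astype(int)
    print(f"cluster {cluster_id}: archetype profile {archetype.tolist()}")
```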

Production pattern (#4)—Gaining in-depth understanding. Whereas some designers aim to disclose main characteristics and relationships based on a possibly broad coverage of instances, other scholars aim to gain an in-depth understanding of a few businesses. To do so, they typically cover entire business models and perform case studies with selected real-world instances. They draw on empirical data from a small set of cases (e.g., collected via interviews and workshops) and additional data (e.g., collected from websites and company-specific documents). Results tend to be represented as morphological boxes, two-dimensional matrices, and tables.

Contributions and implications

Contributions to research

First, we provide a comprehensive repository of taxonomies, typologies, classification schemes, and other classification tools that serves as a starting point for enabling scholars to build upon, reuse, and accumulate knowledge on such tools (vom Brocke et al., 2020). For instance, researchers can systematically explore tools proposed for a certain domain (e.g., healthcare or finance) or a certain technology (e.g., Blockchain) to adapt them to their context and/or extend them (see Table 1). This seems important as we see an increase in business model tools for similar domains and technologies that do not build upon each other.

Second, the conceptually and empirically derived analytical grid (see “Analysis of classification tools for business models”) organizes dimensions and characteristics that are important for building new tools. Researchers can pick from the grid to make more informed decisions when creating new or adapted tools as well as to consistently communicate their tool’s building decisions. Thereby, calls for understanding the underlying decisions of how a classification is designed can be responded to, and demands for scientific rigor can be met (e.g., Lambert, 2015). Moreover, both researchers and practitioners can apply the analytical grid to compare different classification tools and select those that are best suited to their project. In line with Gregor’s (2006) taxonomy of theory types, this paper’s results can be positioned as a “theory for analysis” serving as a prerequisite for developing additional types of theories, such as explanation, prediction, and design and action. With its holistic view of classification tools, our work extends available insights, for instance, on the construction of empirical business model taxonomies (e.g., Möller et al., 2021) and business models in general (Groth & Nielsen, 2015). Although there are some similarities with prior literature (e.g., dimensions for data collection approaches and meta-dimensions for application scenarios), our analytical grid is not restricted to a certain type of classification and aims to incorporate the entire lifecycle of those tools, from grounding, across building and evaluation, to communication and usage.

Third, we raise awareness of the multiplicity of classification tools and the fact that there are (originally) different underlying assumptions to be considered. While, for instance, taxonomies are typically empirical classifications and typologies conceptual classifications, our corpus points to a less strict grounding in which papers tend to apply combinations of the approaches. This might also be attributed to the method of Nickerson et al. (2013), which suggests combining both types for building a taxonomy. Nonetheless, our paper shows differences among the tools to be considered when building or using a tool. In line with this, we can observe some waves with regard to the specification of classification tools: starting rather strictly with a specific grounding for a specific type (e.g., typologies are derived conceptually) and moving towards allowing combinations of several approaches (e.g., taxonomies derived both conceptually and empirically). Against this background, our comparison of tools indicates differences and similarities. Thereby, we contribute to identifying the common characteristics fulfilled by specific tool types and ultimately help to define and demarcate the different tool types.

Contributions to practice

First, our repository of business model classification tools helps designers in getting inspiration from what is already available, as well as helps to navigate through different domains and technologies (see Table 1). As previous literature emphasized the potential of recombining elements for innovation (Gassmann et al., 2014), the comprehensive collection captures and provides access to the entire solution space of business model design options. It allows one to learn from existing solutions (Remane et al., 2017) and complements existing collections for practical use, such as the 45 patterns for sustainable business models (Lüdeke-Freund et al., 2018) or the pattern database for business models (Remane et al., 2017). Opening the solution space (i.e., repositories of design decisions) supports practitioners in handling complex situations (Schön, 1992) characterized by a variety of configurable options, as well as facilitates idea generation (Schoormann et al., 2021).

Second, our work is intended to guide designers and practitioners in creating purposeful classification tools for business models. Besides streams of research, there are also more practice-oriented endeavors in which such tools are designed. For instance, the European Banking Authority (EBA) produced an approach to classify banks in the EU regulatory framework for several reasons: (a) to understand, at a macro level, different business models to determine types of risks, (b) to assess how groups of banks might be affected by new regulations, and (c) to assess performance and riskiness (European Banking Authority, 2018). The presented analytical grid supports such endeavors by providing orientation about decisions for the design of classification tools. In addition to the characteristics to be considered, the initial set of production patterns describes possible configurations of these characteristics. Hence, the patterns complement prior research focusing on more process-oriented guidance for the design, such as the procedure model for the construction of business model taxonomies proposed by Groth and Nielsen (2015).

Third, classification tools, and taxonomies in particular, present a foundation for developing and advancing new business model tools (Bouwman et al., 2020). For instance, knowledge from classification tools can be implemented in software-based tools for business model innovation (Szopinski et al., 2020a, b) to provide features for configuring and analyzing business models within a certain domain or for a certain technology. These tools are generally expected to make valuable contributions to practice and help make research practically usable.

Limitations and directions for future research

Although this paper provides promising insights into the field of business model classifications, it has limitations that open up future research directions (RD) (see Table 2). Whereas we extracted numerous goals and intended purposes for providing and using a business model classification, ranging from understanding a phenomenon, through impulses for new ideas, to the development of completely new artifacts, we did not examine the relationships between goals and tool design decisions in detail. Consequently, future research is required (RD1) to shed light on the dependency between goals and classification tools. From a designer's perspective, it is important to be aware of the range of possible goals that can be supported by a tool. Taking this into account clarifies the why of presenting and using a tool. Schwarz et al. (2007) provided an early overview of the goals of frameworks and reviews, including insights that also refer to classifications.

Table 2 Summary of selected research directions

Another observation from our analysis is the heterogeneity of visualization approaches used to represent a tool. We found tools visualized, for example, as two- and three-dimensional matrices, rather loosely structured tables, well-structured hierarchies, and graphical frameworks. So, which representation is the best possible one for a tool and its intended goals (RD2)? Previous research has already stressed the relevance of matching the representation to the task to be performed or the goal to be achieved (e.g., “[…] human information processing is highly sensitive to the exact form information is presented to the senses [and] apparently minor changes in visual appearance can have dramatic impacts on understanding and problem solving performance” (Moody, 2009, p. 758)). Among the prominent theories are cognitive fit theory (Vessey, 1991), which proposes that a user's performance depends on the fit between task and presentation, and cognitive load theory (Sweller, 1988), which argues that learning is enhanced by appropriate information presentation. Thus, future research is needed to better understand the fit between the representation of classification tools and the goals to be achieved.

In line with prior research (e.g., Möller et al., 2021), we found numerous grounding approaches, including the analysis of real-world businesses, qualitative data from interviews, quantitative data from online surveys, and theoretical justifications. Since appropriate and transparent grounding is an essential ingredient of research (e.g., Goldkuhl, 2004; vom Brocke et al., 2020), scholars might want to investigate specific grounding approaches for business model classification tools (RD3). Again, depending on a tool's goal, researchers might prefer theoretical grounding to build “ideal types” of business models, whereas empirical grounding is more likely to be preferred when the focus lies on options that already exist in practice. Regardless of the overall grounding approach, various data sources can be employed (e.g., secondary data, such as publicly available data from start-up websites, or primary data collected by the researchers themselves). Systematizing grounding approaches would support researchers in the early phases of building new tools; such approaches could, for instance, be synthesized in the form of patterns (e.g., as known from processes (Schoknecht et al., 2020)).

Given the increasing interest in building classification tools (see the chronological distribution in Fig. 1), we should also consider their appropriate evaluation. While scholars in the general classification literature have started to pay particular attention to evaluation methods and criteria (e.g., taxonomy evaluation, Szopinski et al., 2019), a large share of the tools in our sample does not undergo any validating activities. Apart from demonstrating applicability via illustrative examples (e.g., classifying a business model instance with a designed taxonomy), advanced stages concerning proof of use and proof of value (Nunamaker et al., 2015) remain untapped. Future research should investigate specific methods and criteria for evaluation (RD4), including questions concerning what constitutes a useful business model classification tool (e.g., applicability in practice) and which criteria should be taken into account (e.g., number of generated ideas, degree of consistency of a new model, revenue generated by a new business model). Also, from a result-oriented viewpoint (i.e., the application of tools), additional clarification of what constitutes a “successful outcome” of using certain design configurations is required. Rather than merely having a formally correct tool, one might want to achieve more reflected decision-making, faster design of new business models, or even higher revenue from new business models.

Furthermore, only a few papers in our sample rely on and reuse already published classification tools (4/90). While prior research on adjacent artifacts, such as reference models (Legner et al., 2020), stressed that accumulated knowledge is a valuable source of descriptive and prescriptive domain knowledge and derived applicable mechanisms for its reuse, there seem to be no comparable guidelines for this paper's context. This might be attributed to the fact that we found only very few papers reusing existing classifications (e.g., Dehnert et al., 2021, who presented a consolidated taxonomy for data-driven businesses based on 26 IS-related taxonomies) or to the fact that reuse is (wrongly) perceived as an activity that does not lead to novelty. However, given the growing body of classification tools for diverse domains and technologies, their accumulation and evolution should be taken into account (RD5). In doing so, development can become more efficient (e.g., by grounding a new tool in well-accepted existing ones) and available knowledge can be verified or extended. Future research can pick up this idea to derive mechanisms that help reuse the knowledge captured by available classification tools and to examine the actual value of such reuse in order to motivate future researchers.

Lastly, our paper has some methodological limitations. Although we aimed for a comprehensive sample of tools, the results are restricted to the analyzed literature, including its search strategy (e.g., search terms and sources). For instance, while we focus on the terms most frequently used to describe this particular class of artifacts, using adjacent terms, such as business model pattern (Lüdeke-Freund et al., 2018; Remane et al., 2017), might lead to additional insights or help to validate our findings. The analytical grid is informed by conceptual work as well as empirical refinements. Whereas we transparently reported on each of the iterations and discussed the findings within the author team, one might identify additional characteristics that should be included. Following the idea of extendable taxonomies and knowledge accumulation, however, we invite others to validate and/or complement our findings. Also, we primarily focus on the status quo and the differences between types of tools; the mostly descriptive insights can be used to derive more prescriptive knowledge and guidelines on how to build classification tools. Finally, there might be a bias because most papers are concerned with taxonomies. To take this into account, we performed the comparison using fulfillment percentages. Despite these limitations, we hope to broaden the discourse on business model classifications and to outline potentials and shortcomings that should be reflected upon when building new classification tools.

Conclusion

In this paper, we set out to explore the vast landscape of classification tools for business models. These tools have received great interest from academia and practice alike and are assumed to support various activities, such as classifying businesses, understanding design options, and gaining impulses for business model innovation. Given the growing body of classification tools, we observe great heterogeneity and ad hoc development decisions in terms of what type of tool should be built to achieve certain goals. Also, depending on the choice of tool, there are different underpinning assumptions concerning aspects such as grounding and evaluation. To structure this vast field, we present an overview of classification tools proposed for business models, an analytical grid in the form of a taxonomy, and a systematic comparison of different tool types. Our work is intended to complement available business model research and allows researchers to build upon the knowledge captured by available tools, select suitable tool types for their individual projects, and make informed design decisions for new tools. Ultimately, because we have already experienced situations in which classification tools serve as valuable input to adapt, innovate, and create new business models, we hope to contribute to leveraging the full potential of these tools.