
Disentangling Crowdfunding from Fraudfunding

Abstract

Fraud in the reward-based crowdfunding market has been of concern to regulators, but it is arguably of greater importance to the nascent industry itself. Despite its significance for entrepreneurial finance, our knowledge of the occurrence, determinants, and consequences of fraud in this market, as well as its implications for the business ethics literature, remains limited. In this study, we conduct an exhaustive search of all media reports on Kickstarter campaign fraud allegations from 2010 through 2015, and follow up until 2018 to assess the ultimate outcome of each allegedly fraudulent campaign. First, we construct a sample of 193 fraud cases and categorize them into detected vs. suspected fraud, based on a set of well-defined criteria. Using multiple matched samples of non-fraudulent campaigns, we then determine which features are associated with a higher probability of fraudulent behavior. Second, we document the short-term negative consequences of possible breaches of trust in the market, using a sample of more than 270,000 crowdfunding campaigns launched on Kickstarter from 2010 through 2018. Our results show that crowdfunding projects launched around the public announcement of a late and significant misconduct detection (resulting in suspension) tend to have a lower probability of success, raise less funding, and attract fewer backers.

It’s a credit to Kickstarter and the collective power of the crowd to identify fraud….

-CNN Money, June 17, 2013

If you utter the word “crowdfunding” in front of a dusty old-fashioned securities lawyer, make sure you have a fully charged defibrillator on hand. Perhaps a fully equipped contingent of ER doctors and nurses. It won’t be pretty.

-Financial Post, July 31, 2013

Introduction

Reward-based crowdfunding (hereafter, crowdfunding) has emerged in recent years as a catalyst for entrepreneurship, an important new means of financing early-stage ventures, and a door opener for successful financing. As an alternative solution to the capital gap problem for start-ups, crowdfunding can complement or substitute for other sources of financing, such as venture capital or angel investors. Early-stage ventures have benefited enormously from its availability, and its positive impact on new firm creation and future venture capital investments has become increasingly evident (Assenova et al., 2016; Sorenson et al., 2016). This highlights the importance of investigating any issues that could negatively affect the crowdfunding market and endanger its long-term existence.

Trust between counterparties in any economic exchange is vital (Brockman et al., 2020; Hain et al., 2016). Therefore, crowdfunding adoption depends significantly on establishing trust in the market. Equity markets have demonstrated the fragility of trust, and how a breach can not only negatively affect specific firms (Davidson & Worrell, 1988), but result in the collapse of entire market segments (Hainz, 2018). The concept of the Trust Triangle was recently adapted for financial markets and fraud (Dupont & Karpoff, 2019). According to this framework, firms can ex ante invest in accountability and build trust through three main channels: first-party, related-party, and third-party enforcement (the first, second, and third legs of the Trust Triangle). The three legs are not equally effective in a crowdfunding context. The crowdfunding market is still in its infancy, and campaign creators have no legal obligation, for example, to provide income statements or profit and loss accounts to the platform or regulatory bodies. This suggests somewhat weak third-party enforcement in the market. Backers must trust campaign creators to use the funds obtained to deliver on their promises (first-party enforcement), and trust the platform to conduct thorough pre-screening of projects before they are posted (related-party enforcement). Thus, one of the core elements of a functional crowdfunding market is trust among backers, campaign creators, and the platform.

Incidents of fraudulent behavior by campaign creators, and the failure of platforms to prevent them, can erode crowdfunding backers' willingness to participate. It is therefore important to document fraudulent cases in order to (1) assess which factors signal weak first-party enforcement and help predict subsequent fraud, and (2) identify incidents that lead to a breach of trust associated with weak related-party enforcement and analyze their consequences.

In the first part of our empirical analyses (Determinants of Fraud), we categorize fraudulent behavior based on Kickstarter campaign fraud allegation reports from 2010 to 2015. We follow these cases until 2018 to assess the outcomes. We conduct a methodical search of media reports, and use specific criteria to finalize a sample of campaigns associated with fraudulent behavior. Using this sample and multiple matched samples of non-fraudulent campaigns, we find that fraudsters are less likely to have engaged in prior crowdfunding activities and to use social media, such as Facebook. We also find that fraudsters tend to offer a higher number of enticements through pledge categories, and to choose longer campaign durations. Finally, based on readability indices, fraudsters are more likely to provide easier-to-read campaign pitches.

In sum, we identify which factors signal first-party enforcement and project quality, and our results illustrate their relevance in predicting subsequent fraudulent behavior.

In the second part of our analyses (Platform-wide Consequences of Fraud), we document that a large public crowdfunding scam can have an economically significant negative impact on concurrent projects. Even a few such incidents over a short period of time may therefore cause a tremendously negative spillover effect.

We collect data on more than 270,000 campaigns from 2010 through 2018. As a result of Kickstarter "late" suspensions (which may signal weak related-party enforcement and inefficient platform pre-screening), the probability of reaching the goal amount for campaigns launched around the same date is about 6.38% lower. On average, all else being equal, the pledged amount decreases by 9.6%.

Backers’ trust in platform integrity is especially vital because platform revenue is a percentage of the amounts raised, creating a potential agency problem. Backers may react negatively if they perceive suspended campaigns as evidence not only of weak legal enforcement, but also of inefficient platform scrutiny. We highlight the importance of related-party enforcement and platform scrutiny before projects are posted, especially since platforms do not generally enforce accountability once funds are transferred to creators (e.g., by charging insurance fees proportional to campaign overcontributions).

Our paper is related to the growing literature on crowdfunding that, to date, has focused primarily on determinants of funding success (see, e.g., Agrawal et al., 2015; Ahlers et al., 2015; Belleflamme et al., 2013; Coakley & Lazos, 2021; Colombo et al., 2015; Mollick, 2014; Rossi et al., 2021; Vismara, 2016). Prior research has explored late deliveries (Mollick, 2014), project or firm failures (Hornuf et al., 2018; Signori & Vismara, 2018), factors affecting backer trust (Liang et al., 2019), mechanisms to deter misconduct (Belavina et al., 2020), and the impact of pro-social framing, altruism, and self-interest on crowdfunding success (André et al., 2017; Berns et al., 2020; Defazio et al., 2020). Other papers have examined the role of securities regulation in equity crowdfunding markets (Bradford, 2012; Hornuf & Schwienbacher, 2017), return on investment in equity crowdfunding (Hornuf et al., 2018; Signori & Vismara, 2018), and the dynamics of crowdfunding project support over time (Hornuf & Schwienbacher, 2018).

We contribute to the entrepreneurial finance literature by identifying specific campaign- and creator-related factors that correlate with fraudulent behavior in the crowdfunding market. We also document the negative effect of perceived weak platform scrutiny on the success of concurrent campaigns. Our study opens avenues for future research on crowdfunding fraud and its effects by developing and integrating new fraud detection models in an entrepreneurial finance setting (see, e.g., Allen et al., 2021; Perez et al., 2020).

The remainder of this paper is organized as follows. The next section develops our hypotheses. We then introduce the data and outline our methodology. The “Empirical Results” section presents univariate and multivariate empirical analyses, as well as several robustness checks. The final section concludes and discusses implications for research, practice, and policy.

Theory and Hypotheses

Dupont and Karpoff (2019) explain the importance and fragility of trust in the process of economic exchange. They introduce a framework with three mechanisms to provide discipline, deter opportunistic behavior, and build sufficient trust.

The equity markets have shown that fraudulent activities can result in sharp declines in firm performance and share prices (Karpoff et al., 2008; Rezaee, 2005), but also in the collapse of entire market segments. In 1997, the market segment Neuer Markt was established on the German stock exchange, with the goal of financing innovative small and medium-sized growth companies. After a strong start, the segment reached a market capitalization of $234 billion (Hainz, 2018). However, several incidents of corporate fraud and misconduct eroded its reputation, and it was closed only 6 years after launch, down 90% from its market peak. Similarly, since crowdfunding is a new phenomenon, fraud cases can be very destructive and lead to spillover effects on future campaigns.

As mentioned earlier, the three legs of the “Trust Triangle” are: (1) first-party enforcement (personal ethics, integrity, culture); (2) related-party enforcement (market forces and reputational capital); and (3) third-party enforcement (laws, regulations, regulators). Legal enforcement by government agencies within the crowdfunding market has been relatively lax, and regulators have limited capacity for enforcement. Thus, project creators’ integrity and platform enforcement are of paramount importance in determining backers’ trust level. This is all the more important because platform revenue is directly tied to the amounts raised (usually a fixed percentage), and amounts over the goal go to creators.

After a campaign ends, and funds are distributed, there is a risk that creators will cease working on the venture, or that they will use the funds to extract private benefits, creating a moral hazard problem (Hainz, 2018). This risk can be reduced by writing complete contracts, typically not feasible in this context, or by strengthening first- and related-party enforcement.

We focus on the first leg of the Trust Triangle and signals of project quality to develop Hypotheses 1–3. We aim to identify which creator and campaign characteristics are perceived as credible signals of first-party enforcement.

Economists and psychologists suggest various reasons why individuals engage in fraud. In a crowdfunding context, backers can analyze campaign pages on the platforms and draw their own expectations about quality and fraud probability. For example, they can read campaign descriptions and view campaign videos. All of this information clearly helps reduce asymmetric information, but it does not eliminate it. Fraudulent campaign creators, on the other hand, have a clear incentive to increase information asymmetries and hinder backers from distinguishing fraudulent projects. Therefore, it is necessary to identify creator and campaign features that can ex ante serve as signals of first-party enforcement and that are difficult or costly to mimic. We posit that fraudsters may implement symbolic actions to build trust and increase their chance of success (e.g., Zott & Huy, 2007).

In the realm of crowdfunding, we identify three broad themes where backers could theoretically identify signals of stronger first-party enforcement based on available information: (1) creator(s)’ characteristics/background, (2) creator(s)’ social media affinity, and (3) campaign characteristics.

Social psychologists argue that, even when people are acting dishonestly, they nevertheless remain concerned about maintaining a positive self-image (Gino et al., 2009; Jiang, 2013; Mazar et al., 2008). This brings us back to the first leg of the Trust Triangle, which suggests that personal ethics play an important role when campaign creators commit fraud. Mann et al. (2016) focus on non-violent crimes, and find that internal sanctions provide the strongest deterrents, while the effect of legal sanctions is weaker and varies across countries. As a result, crowdfunding fraud may not only follow an economic calculation by a project creator; it may also reflect personal attitudes and reputation.

For example, we do not generally expect creators with a rich history of successful campaigns to suddenly launch fraudulent projects. As Diamond (1989) notes, creators build their reputations by engaging in the market more frequently, and could suffer large losses from misconduct. A history of multiple honest campaigns therefore signals experience, which may decrease the probability of future dishonest campaigns. Similarly, creators who have previously backed other crowdfunding projects are likely to believe in the overall idea of crowdfunding (Cumming et al., 2019b). This can make it difficult for them to reconcile the idea of leading a scam. However, we note that backing multiple projects is easier and less costly for fraudsters to mimic, as they can contribute small amounts to multiple campaigns to signal prior activity. In sum, we predict a negative relationship between crowdfunding fraud and the intensity with which a creator uses crowdfunding as a backer or a creator (see Hypothesis 1).

Hypothesis 1

(Creator(s)’ Characteristics and Background) Crowdfunding fraudsters are less likely to have engaged in prior crowdfunding activities.

Backers can also easily screen creators’ social media activities. If personal ethics and a positive self-image are important, fraudsters may avoid the use of social media because it can facilitate fraud detection. Furthermore, an observable social media presence may indicate that a creator has more to lose from cheating in terms of social connections, and could be subject to more intense monitoring. Similar to earlier work on the effect of media on corporate social responsibility (El Ghoul et al., 2019), we theorize that a social media presence can lower the risk of crowdfunding fraud. Moreover, early backers are often friends and family, a specific feature of non-equity crowdfunding (Agrawal et al., 2015; Colombo et al., 2015). Arguably, this could jeopardize the positive self-image of a campaign creator (Shalvi et al., 2015), and make committing outright fraud harder.

Lin et al. (2013) show that, in peer-to-peer lending, borrowers’ online friendships act as signals of credit quality and lead to a higher probability of successful funding. However, fraudsters may manipulate social media information, by, e.g., using phony Facebook pages. Hence, it is unclear whether elaborate fraudsters have fewer or more social media contacts, and how difficult it is to mimic this feature. The same is true for using fake links on campaign websites that lead to other fake websites purporting to support the trustworthiness of a campaign. This highlights the importance of the first leg of the Trust Triangle. Thus, we predict a negative correlation between social media use and fraud.

Hypothesis 2

(Social Media Affinity) Crowdfunding fraudsters are less likely to have a social media presence, and tend to provide fewer external links.

Finally, Campaign Funding and Reward Structure and Campaign Description Details, which we group together as Campaign Characteristics, can provide credible signals of first-party enforcement and project quality (Spence, 1973). Shailer (1999) develops a theoretical model showing that the signals entrepreneurs provide to lenders (through information or actions) may assist them in allocating ex ante default probabilities based on lenders’ prior knowledge of group characteristics. We aim to identify and determine the value of such signals in crowdfunding, and gauge how they correlate with fraudulent behavior.

We observe that more confident creators restrict the funding period because they believe their projects will be funded rapidly. Fraudsters, in contrast, are less able to send credible signals of quality, so they may extend the funding period to raise as much capital as possible. However, longer funding periods also make detection more likely, and increase the risk of not receiving funds. Whether a longer funding period reduces or increases the probability of fraud thus remains an empirical question. Because we believe a short duration is a credible signal of project quality, we derive Hypothesis 3.A as follows:

Hypothesis 3.A

Crowdfunding fraudsters are more likely to implement longer funding periods.

While backers may detect fraud once, e.g., a creator fails to deliver a product, whether the scam is ultimately prosecuted may be the most important consideration for a fraudster. As noted above, the smaller the amount invested by backers, the less likely they will be to engage in litigation. Consequently, fraudsters may simply target as many backers as possible who contribute only small amounts. One common method is to create many different pledge categories, to smooth the way for small-size contributions. We, therefore, derive Hypothesis 3.B as follows:

Hypothesis 3.B

Crowdfunding fraudsters are more likely to offer smaller minimum pledge allowance choices.

Research shows that perpetrating securities fraud in publicly traded firms is easier when confusion exists among investors (Fischel, 1982; Perino, 1998; Simmonds et al., 1992). Research on the manipulation of stock markets has long explored so-called “pump and dump” schemes. These schemes involve acquiring long positions in stocks, and then heavily promoting them online or by spoof trading (deleting orders before execution to keep up appearances of an active book). In this way, fraudsters encourage other investors to purchase the stocks at successively higher prices, and then they sell their own shares. In a similar way, crowdfunding fraudsters can heavily promote a campaign by offering many project enticements with various reward levels (Belleflamme et al., 2014; Mollick, 2014). Moreover, because they do not intend to ship anything or continue communicating with backers, they are not constrained by excess demand or other costs later on. We, therefore, derive Hypothesis 3.C as follows:

Hypothesis 3.C

Crowdfunding fraudsters are more likely to offer a larger number of reward/pledge categories.

Finally, in crowdfunding markets, fraudulent campaign creators may try to increase information asymmetries to make it more difficult for backers to differentiate between scams and worthwhile projects. The main way to convey information about a project is through the description, which is normally a few thousand words (Cumming et al., 2019a). Crowdfunding fraudsters are, therefore, less likely to provide a professionally worded description in order to foster confusion and avoid detection. In contrast, professional entrepreneurs are likely to use campaign descriptions to signal quality.

It is complicated to accurately and professionally describe a product that does not exist. This is in line with findings by Siering et al. (2016), who show that linguistic and content-based cues in static and dynamic contexts can help predict fraudulent crowdfunding behavior. Parhankangas and Renko (2017) show that certain linguistic styles, such as those that make the campaign and creator(s) more relatable, increase the probability of success of social campaigns. Alternatively, simpler descriptions (which require no specialized knowledge to understand) may help fraudsters target a less educated crowd. We, therefore, derive Hypothesis 3.D as follows:

Hypothesis 3.D

Crowdfunding fraudsters are more likely to use simply worded campaign descriptions (i.e., lower formal education required to understand the description on a first read).
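Hypothesis 3.D can be tested with standard readability indices. As an illustration of the kind of measure involved (the paper does not specify its exact implementation, and this sketch uses a rough vowel-group syllable counter rather than a dictionary), the following computes the Flesch-Kincaid grade level of a campaign description:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: roughly the years of formal education
    needed to understand the text on a first read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

simple = "We make a fun game. You get a copy. It ships fast."
dense = ("Our revolutionary manufacturing methodology leverages "
         "proprietary interdisciplinary nanotechnology innovations.")
assert fk_grade(simple) < fk_grade(dense)  # simpler pitch, lower grade level
```

Under Hypothesis 3.D, fraudulent campaigns would exhibit systematically lower grade-level scores than matched non-fraudulent ones.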

Next, to develop Hypothesis 4, we focus on the second leg of the Trust Triangle. In general, reward-based crowdfunding platforms do not conduct sophisticated background checks or due diligence (in contrast to, e.g., equity crowdfunding platforms). However, Kickstarter employs a “Trust & Safety” team to assess campaigns, and it can recommend suspensions for rule violations. Note that suspended campaigns do not necessarily denote fraud. But the platform-wide consequences of observed incidents of misconduct detection, proxied by campaign suspensions, are a priori unclear and thus worth investigating empirically.

For example, backers who observe campaigns being suspended may infer that related-party enforcement works. On the other hand, backers who learn that fraudulent campaign creators have already conducted many scams prior to suspension may infer that the platform cannot ensure accountability and that the pre-screening process is inefficient. Hence, suspensions of campaigns that have already attracted many backers, raised large amounts of funds, and are close to their scheduled deadlines can substantially weaken backers’ confidence in their own fraud detection skills, as well as in related-party enforcement. Weaker trust may cause concurrent crowdfunding campaigns to face difficulties raising capital and achieving funding goals. We, therefore, derive Hypothesis 4 as follows:

Hypothesis 4

(Platform-Wide Consequences of Fraud): Campaigns posted around a late and visible suspension of a successful crowdfunding project have a lower probability of success, tend to raise less funding, and attract fewer backers.

Data

We divide our data collection into two parts. First, we categorize fraudulent campaigns, derive the respective fraud and matched non-fraud samples, and examine the factors associated with a higher likelihood of observing fraudulent behavior. Second, we construct our sample for studying platform-wide consequences of breaches of trust. Variable definitions are in Table 1.

Table 1 Variable definitions

Categorizing Fraudulent Behavior in Crowdfunding

A legal definition of fraud in crowdfunding is not simple to operationalize for an empirical study, because, to date, few cases have been tried in court. In a theoretical context, Belavina et al. (2020) note that platforms can leave backers exposed to two risks: (1) funds misappropriation, where entrepreneurs run away with backers’ money, and (2) performance opacity, where product specifications are misrepresented. Therefore, we focus on industrywide definitions of detected fraud and suspected fraud (see, e.g., Crowdfund Insider for an overview). We next describe our categorization of fraud in more detail based on media-reported cases, resulting in a sample of 193 fraudulent campaigns.

The first category, detected fraud, includes (1) pre-empted fraud, when a supposedly fraudulent campaign is reported in the media but is either suspended by the platform or canceled by the creator before money is transferred to the creator’s account. Both typically result from backer complaints to the platform provider, or from online postings warning that the campaign carries a risk of fraud; and (2) attempted fraud, when fraud was not originally detected during the campaign’s funding period, and campaign creators obtain the amounts raised. In this case, after funding completion, backers may find that, e.g., creators misrepresented material facts, used intellectual property to which they do not hold legal rights, or that the project is an outright fake.

The second category, suspected fraud, occurs when a supposedly fraudulent campaign is reported in the media, and either three specific conditions (1a–1c, described below) are met simultaneously, or the rewards are changed to the disadvantage of backers (condition 2). The three conditions are: rewards are delayed by more than 1 year from the promised delivery date (condition 1a); the creators cease credible communications with backers, such as updates on the campaign web page, for at least 6 months after the promised delivery date (condition 1b); and rewards are not delivered, and backers have been neither partially nor fully refunded as of December 31, 2018 (condition 1c).

Detecting campaigns where rewards were changed significantly can be accomplished by studying news articles on a particular campaign, or by reading comments posted by backers after rewards delivery. However, if delivery is overdue, it is more difficult to distinguish between fraudulent projects and those that failed or experienced normal unforeseen setbacks.

To overcome this problem, we categorize campaigns where rewards are delayed for at least 1 year after the promised delivery date as suspected fraud, but only if (1) the creator has also not posted meaningful updates for at least 6 months after the originally promised date, (2) the promised reward has not been delivered by the end of our observation period, and (3) backers were not at least partially refunded. To classify projects as suspected fraud, we tracked all campaigns until December 31, 2018. If any of these three criteria was not met, we exclude the project from our suspected fraud sample. We acknowledge that extreme incompetence of project creators can be an alternative explanation for campaigns considered fraudulent. However, failing to provide explanations and updates is itself a form of serious misconduct.
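The suspected-fraud rule can be summarized as a small decision function. This is a sketch of the classification logic only; the field names are illustrative, and the paper's actual coding was done by hand from media reports and campaign pages:

```python
from datetime import date, timedelta
from typing import Optional

OBS_END = date(2018, 12, 31)  # end of the follow-up window used in the paper

def is_suspected_fraud(promised: date,
                       delivered: bool,
                       refunded: bool,
                       last_update: Optional[date],
                       rewards_changed_adversely: bool,
                       today: date = OBS_END) -> bool:
    """Suspected fraud for a media-reported campaign: either rewards were
    changed to backers' disadvantage (condition 2), or conditions 1a-1c
    hold simultaneously."""
    if rewards_changed_adversely:                                      # condition 2
        return True
    cond_1a = not delivered and today > promised + timedelta(days=365)   # >1 year late
    cond_1b = (last_update is None or                                    # no credible updates for
               last_update <= promised + timedelta(days=183))            # >=6 months after promise
    cond_1c = not delivered and not refunded                             # no delivery, no refund
    return cond_1a and cond_1b and cond_1c

# A campaign promised for early 2015 with no delivery, refund, or updates:
assert is_suspected_fraud(date(2015, 1, 1), False, False, None, False)
# A delivered campaign is never classified as suspected fraud:
assert not is_suspected_fraud(date(2015, 1, 1), True, False, date(2015, 2, 1), False)
```

The function makes the conjunctive structure explicit: failing any one of conditions 1a-1c (absent an adverse reward change) removes a campaign from the suspected-fraud sample.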

Note that there are other forms of crowdfunding fraud that are outside the scope of this article, because they are not possible to detect in a comprehensive manner. These include so-called stillborn fraud, where a potential fraud campaign is rejected by the crowdfunding platform before it is launched. Fraud is also not necessarily limited to project creators; there have been cases of reported fraud by crowdfunding backers, and even by some platforms themselves.

There is no commercial database available for fraud cases in crowdfunding, but our base media reports sample covers all actual and potential fraud campaigns reported on a website called Kickscammed (http://kickscammed.com). Kickscammed is an independent site where the crowd can report suspicious or fraudulent crowdfunding activities. It is not linked to Kickstarter.

Table 2 shows the steps in constructing our fraudulent campaign sample. As of April 30, 2016, we were able to identify and confirm 181 fraud cases for the 2010–2015 period that were reported on Kickscammed and met our criteria for detected or suspected fraud. However, Kickscammed does not cover all instances of fraud on Kickstarter, so we complement our dataset with a news search using Google, Factiva, and LexisNexis. Our initial fraud dataset therefore comprises 200 fraudulent campaigns. After excluding 7 campaigns for which no data were available, our final sample consists of 193 fraudulent cases (see Table 2, Panel A).

Table 2 Derivation of fraudulent campaigns’ sample (“determinants of fraud” analyses)

Panel B of Table 2 illustrates the differences in the number of identified fraud cases across categories. We mark 44 campaigns as detected fraud (19 “Pre-empted” and 25 “Attempted”), and 149 as suspected fraud (5 “Rewards Changed” and 144 “Rewards Not Delivered”). The number of fraudulent campaigns we identify (within the 2010–2015 period) seems low in comparison to the total number of projects on Kickstarter. This raises the question of whether we are observing only the tip of the iceberg because fraud in crowdfunding is difficult to detect.

Following Hainz (2018), we identify several reasons why crowdfunding fraud may rarely be observed. First, the crowd is relatively efficient at detecting fraudulent campaigns (most backers have experience from prior campaigns). Second, platforms such as Kickstarter are relatively effective at filtering out fraudulent projects before they are posted. Third, non-reporting of fraudulent campaigns is highly likely, especially when a campaign is unsuccessful and no money has changed hands, because neither backers nor platform providers have a strong incentive to report it. Fourth, backers of successful but fraudulent campaigns may not bother to report fraud if they contributed only a small amount.

Determinants of Fraud

In order to obtain a non-fraud control group with similar characteristics, we apply a propensity score matching (PSM) algorithm. We match our fraudulent campaigns only on campaign-related demographic characteristics (year, country, campaign category) and goal amount, to ensure we do not select on other factors that could potentially explain fraudulent behavior.

We implement nearest-neighbor one-to-one matching of fraud and non-fraud campaigns without replacement. We then construct our sample for the main analyses. As a robustness check, we also provide results based on one-to-one matches with replacement, and one-to-two matches (with and without replacement). Our main analysis considers 386 crowdfunding campaigns (193 one-to-one pairs of matched fraud and non-fraud campaigns). We checked the campaign web pages of all non-fraud matches to ensure that none were suspected of fraud. We then hand-collected nineteen explanatory campaign variables from Kickstarter, based on information from campaign web pages or social media pages associated with the campaign or creator.
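The matching step can be illustrated with a minimal greedy nearest-neighbor routine over precomputed propensity scores. This is a sketch under simplifying assumptions: the scores are taken as given (in the paper they would be estimated from campaign year, country, category, and goal amount), and the descending-score processing order is one common heuristic, not necessarily the paper's exact algorithm:

```python
def match_without_replacement(fraud_scores, control_scores):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    Without replacement: each control campaign is used at most once.
    Returns (fraud_index, control_index) pairs."""
    available = dict(enumerate(control_scores))
    pairs = []
    # Process treated units in descending score order, so hard-to-match
    # cases (extreme scores) get first pick of controls.
    for i, s in sorted(enumerate(fraud_scores), key=lambda t: -t[1]):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - s))
        pairs.append((i, j))
        del available[j]  # no replacement: control j is now used up

    return pairs

# Two fraud campaigns matched against three candidate controls:
pairs = match_without_replacement([0.9, 0.2], [0.85, 0.25, 0.5])
assert set(pairs) == {(0, 0), (1, 1)}
```

Matching with replacement would simply skip the deletion step, allowing one control to serve several treated campaigns; one-to-two matching would select the two nearest controls per fraud case.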

Platform-Wide Consequences of Fraud

Next, we study platform-wide consequences of breaches of trust. To this end, we use an event study-like setting to demonstrate whether late suspensions by Kickstarter, which we classify as large public scams based on four criteria, negatively affect the success of other campaigns launched around the same time. One challenge is to identify the “announcement date” on which the fraud became visible to the community (i.e., potential backers). We use Kickstarter’s suspension dates for large successful campaigns associated with misconduct. Note again that there is no legal proof that suspended cases constitute outright fraud. If Kickstarter’s “Trust & Safety” team uncovers evidence that a campaign is in violation of its rules, the campaign is suspended, according to Kickstarter’s procedures.

We scraped data on all suspended campaigns using the “Explore” function of Kickstarter, which resulted in 1,760 campaigns with suspension dates between January 1, 2010, and September 30, 2018. Table 3 provides an overview within each main category for the respective year (Panel A) and pledged dollar volumes (Panel B). We use this population to determine the most severe and visible scam campaigns that attracted backers, as well as their “announcement dates,” as we describe below.

Table 3 Derivation of suspended campaigns’ sample (“consequences of fraud” analyses)

We aim first to identify “late” suspensions. We posit that, if Kickstarter suspends a campaign in its early stages, this should be a positive signal to the crowd of related-party enforcement. Thus, we should not see a negative effect on other projects’ funding or on the market as a whole. Second, we aim to ensure that such announcements are as visible as possible to the crowd. We follow a two-step procedure to identify suspended campaigns (ensuring late suspension and visibility) with the highest negative platform-wide consequences, which can be regarded as large, public scams.

Late suspension criteria: First, at least 20% of the allegedly fraudulent campaign's duration must have passed. Second, fewer than 7 days must remain until the campaign's end. These criteria ensure that the suspension was perceived as "late" in the crowdfunding community, and could therefore affect the funding success of other, non-fraudulent campaigns. The first criterion reduces the total number of 1760 suspended campaigns by 859, and the second by a further 689, leaving us with 212 (see Table 3, Panel C).

Visibility criteria: Unfortunately, there is no direct measure of campaign visibility available, but we argue that it correlates highly with the number of pre-suspension backers in a campaign. The third criterion (that a suspended campaign must have attracted at least 1000 backers) is important, because 580 campaigns were suspended before a single backer contributed. If campaigns are suspended by Kickstarter before anyone can contribute, backers may believe related-party enforcement has worked. In that case, we do not expect to observe any negative impact on platform-wide funding activities.

As another proxy for campaign visibility, we use the pledged amount before suspension. We therefore require, as a fourth criterion, that at least USD $10,000 have been contributed to a campaign before suspension. The backer-count criterion reduces the number of suspended campaigns by another 198, while the contribution requirement does not result in any further exclusions (see again Table 3, Panel C). In sum, based on the four criteria, we identify fourteen suspended campaigns that may have had a sizable negative platform-wide effect (see Table 3, Panel D).Footnote 16
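
The four filter criteria can be sketched as a simple predicate. The field names, thresholds as coded, and example dates below are illustrative stand-ins, not taken from the paper's data:

```python
from datetime import datetime, timedelta

def is_large_public_scam(launched, deadline, suspended, backers, pledged_usd):
    """Apply the four filter criteria to one suspended campaign.

    Late-suspension criteria:
      1. at least 20% of the campaign's duration had elapsed at suspension;
      2. fewer than 7 days remained until the campaign's deadline.
    Visibility criteria:
      3. at least 1,000 backers before suspension;
      4. at least USD 10,000 pledged before suspension.
    """
    duration = deadline - launched
    elapsed = suspended - launched
    late = elapsed >= 0.2 * duration and deadline - suspended < timedelta(days=7)
    visible = backers >= 1000 and pledged_usd >= 10_000
    return late and visible

# Example: a campaign suspended two days before its 30-day deadline
launched = datetime(2014, 5, 1)
deadline = datetime(2014, 5, 31)
suspended = datetime(2014, 5, 29)
print(is_large_public_scam(launched, deadline, suspended, 1500, 120_000))  # True
```

An early suspension (criterion 1 violated) or a low backer count (criterion 3 violated) would return False, mirroring the exclusions reported in Table 3, Panel C.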

We then collect comprehensive data from Kickstarter for all campaigns with a goal amount of at least USD $100 (excluding very small donation-like campaigns), launched on or after January 1, 2010 and ending on or before December 31, 2018, and either successful/funded (reached goal amount) or unsuccessful/failed (did not reach goal amount).Footnote 17 Our scraping procedure identified 271,971 unique campaigns within 15 main categories on Kickstarter.

Table 4 provides an overview, showing the number of launched campaigns for each year within the main categories (Panel A), their respective success rates (Panel B), and summary statistics (all values are converted to USD using Kickstarter’s static USD rate). It also shows the correlation matrix for all variables considered in the analyses of platform-wide consequences of fraud (Panel C).

Table 4 Overview of Kickstarter sample (2010–2018)

Methods

We first specify a baseline regression model for the determinants of fraud analyses using three sets of characteristics: creator’s characteristics/background, social media affinity, and campaign characteristics (campaign funding and reward structure, as well as campaign description details). We apply a logistic regression model to examine the determinants of our dependent variable, Fraud, which equals 1 if the campaign is in our fraud sample, and 0 otherwise.

The non-fraud campaigns are based on a PSM approach using available demographic variables. This ensures that our control sample is not affected differently by national regulations, culture, project category, size, or time period of crowdfunding (Aggarwal et al., 2016; Attig et al., 2016; El Ghoul et al., 2016). The structure of our baseline logistic regression model is as follows:

$${Fraud \left(0/1\right)}_{i}= \alpha +{\sum\limits_{j} }{\gamma }_{j}\times {Creator(s){^{\prime}}\; Characteristics/Background}_{j}+{\sum\limits_{k} }{\xi }_{k}\times {Social\; Media\; Affinity}_{k}+{\sum\limits_{l} }{\varphi }_{l}\times {Campaign\; Funding\; and\; Reward\; Structure}_{l}+{\sum\limits_{m} }{\phi }_{m}\times {Campaign\; Description\; Details}_{m}+{\varepsilon }_{i}.$$
(1)

For each campaign i, the main explanatory variables are the j variables in the creator(s)' characteristics/background block (Creator-Backed Projects and Creator-Created Projects). The k variables in the social media affinity block include # External Links and Facebook. The l variables in the campaign funding and reward structure block include Duration, Min. Pledge Amount, and No. of Pledge Categories. Finally, the m variables in the campaign description details block include the ARI and Video Pitch. We do not include year, country, or campaign category fixed effects, because our samples have been matched and are balanced on those variables (see Bertoni et al. (2011), Grilli and Murtinu (2014), and Lee et al. (2015) on time variation in access to finance). We do use robust standard errors, clustered one-way by campaign category in all regressions, because residuals can be correlated within categories (Thompson, 2011).
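
As a minimal illustration of the logistic specification in Eq. (1), the sketch below fits a two-regressor logit by gradient ascent on simulated data. The variable names and data are hypothetical stand-ins, and the paper's actual estimation additionally computes category-clustered robust standard errors, which this sketch omits:

```python
import math
import random

def fit_logit(X, y, lr=0.5, iters=2000):
    """Minimal logistic regression via gradient ascent on the average
    log-likelihood (illustrative only; no standard errors computed)."""
    n, k = len(X), len(X[0])
    beta = [0.0] * (k + 1)                      # [intercept, b1, ..., bk]
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted P(Fraud = 1)
            err = yi - p
            grad[0] += err
            for j, x in enumerate(xi):
                grad[j + 1] += err * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Toy stand-ins for, e.g., Creator-Created Projects and # External Links,
# simulated so that both relate negatively to fraud
random.seed(1)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(400)]
y = [1 if (-0.8 * x1 - 0.5 * x2 + random.gauss(0, 1)) > 0 else 0 for x1, x2 in X]
beta = fit_logit(X, y)
print([round(b, 2) for b in beta])  # both slope estimates should be negative
```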

We run several robustness checks, where we (1) use different nearest-neighbor matching procedures (one-to-one and one-to-two, with and without replacement options) for our main analysis, and (2) operationalize our theoretical concepts with different variables and alternative proxies for creator(s)’ characteristics/background (Waiting Time (months), Formal Name, and Natural Person), social media affinity (Facebook_Page, Facebook_Personal, LinkedIn, Log (FB Connections)), and project description readability indices (CL, FKG, and GF).

Note that our model does not aim to forecast the probability that a given campaign is fraudulent for a given set of explanatory variables. This would be extremely difficult to achieve. King and Zeng (2001b) explain that, in a case–control design where the fraction of failures in the data differs from that in the population, the estimated probabilities (i.e., forecasts) are biased and require prior correction. King and Zeng (2001a) posit that, for logit models with unknown sampling probability, as in our set-up, the constant term is biased but the slope estimates remain largely unbiased. Therefore, prior correction is applied only to the constant term. However, calculating the correction term, which is subtracted from the estimated constant, requires knowledge of the underlying probability of fraud in the population. We do not know this probability, because undetected fraud (false negatives) in the population prevents us from calculating the correction. Thus, we are only interested in the coefficients of the independent variables, which are unaffected by this bias and generalizable to the population (King & Zeng, 2001a, 2001b).
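
For completeness, the prior correction derived by King and Zeng (2001a), which we cannot apply, can be written as follows, where \(\tau\) denotes the (unknown) population fraction of fraudulent campaigns and \(\overline{y}\) the fraction in the sample:

$$\hat{\alpha }_{corrected}= \hat{\alpha } - \mathrm{ln}\left[\left(\frac{1-\tau }{\tau }\right)\left(\frac{\overline{y}}{1-\overline{y}}\right)\right].$$

Because \(\tau\) is unobservable, only the slope coefficients are interpreted.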

Second, we present the methodology related to our platform-wide consequences of fraud analyses. We require a goal amount of at least USD $100 to avoid micro campaigns. To determine whether the dynamics differ for campaigns that are more likely to be related to entrepreneurial activities, we require a goal amount of at least USD $10,000, and we repeat the analyses (see Mollick and Nanda (2015) for a similar argument). The structure of our logistic (and OLS) regression model is as follows:

$${Success}_{i}= {\beta }_{0}+{\beta }_{1,a}\times {Fraud\; Period}_{i}+{\beta }_{1,b}\times {Post\; Fraud}_{i}+{\beta }_{2}\times {Duration}_{i}+{\beta }_{3}\times {Waiting\; Time}_{i}+{ \beta }_{4}\times {Featured}_{i}+{ \beta }_{5}\times {Log\; Goal}_{i}+{ \beta }_{6}\times {Daily\; Activity}_{i,a}+{\phi }_{a,b}+{\varphi }_{a,b}+{\lambda }_{a}+{\theta }_{a}+{\xi }_{a}+{\varepsilon }_{i},$$
(2)

for each campaign i, Success represents the dummy variable Funded (Logistic), the variable Log Pledged (OLS), or the variable Log Backers (OLS). Our main variable of interest is (1) the dummy variable Fraud Period, which equals 1 if the campaign's start date is within 14 days (± 14) of a late suspension announcement, and 0 otherwise, or (as an alternative proxy) (2) the dummy variable Post-Fraud, which equals 1 if the campaign's start date is within 14 days after a late suspension announcement, and 0 if the campaign ended within 14 days before it (we omit campaigns with other start/end dates).

If Hypothesis 4 is supported, we expect to find negative coefficients for \({\beta }_{1,a}\) and \({\beta }_{1,b}\) for all three success measures. We control for the three main variables (i.e., Duration, Featured, and Log Goal), which are also used in Mollick (2014) and have a significant influence on campaign success, plus Waiting Time to proxy for a creator’s experience on the platform. We also introduce a new control variable, Daily Activity, to proxy for the level of competition while a project is live.

Classifying a campaign as posted within a fraud period is not as straightforward as in an ordinary event study. A campaign suspension is not typically a "1-day" event, because, e.g., campaigns launched before the suspension date with deadlines scheduled after it are affected by the suspension, as are those launched shortly afterward. We define a dummy variable "Fraud Period" for each of the 271,971 campaigns that equals 1 if the campaign is launched within 14 days before or after any of the identified suspension dates.Footnote 18 We choose 14 days because most campaigns have a duration of about 30 days. For the robustness checks, we vary the window from ± 7 to ± 29 days instead of ± 14.
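
The Fraud Period classification can be sketched as follows; the suspension dates below are placeholders, not the paper's fourteen identified dates:

```python
from datetime import date, timedelta

# Placeholder suspension ("announcement") dates (illustrative only)
SUSPENSION_DATES = [date(2013, 6, 17), date(2014, 8, 1)]

def fraud_period(launch_date, window_days=14):
    """Return 1 if the campaign was launched within +/- window_days of
    any identified suspension date (the paper's baseline is 14 days)."""
    w = timedelta(days=window_days)
    return int(any(abs(launch_date - s) <= w for s in SUSPENSION_DATES))

print(fraud_period(date(2013, 6, 25)))  # 1: eight days after 2013-06-17
print(fraud_period(date(2013, 9, 1)))   # 0: outside every window
```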

When using the classification Fraud Period to identify campaigns most likely to be affected by a suspension announcement, we include a series of fixed effects: campaign category (\(\phi\)), year (2010–2018) (\(\varphi\)), month of year (January–December) (\(\lambda\)), day of month (first to last day of the respective month) (\(\theta\)), and day of week (Monday–Sunday) (\(\xi\)). These capture dynamics in different categories, as well as any time effects that may influence crowdfunding (and platforms) in certain years, months, or days. We also include Daily Activity (the average daily number of "live" projects during a campaign's lifetime). This variable captures effects on campaign success that are directly related to platform activity but are not picked up by the series of fixed effects. This control is important: intuitively, we expect competition intensity (the number of live campaigns on the platform) to be inversely related to campaign success (see Chen 2021 for empirical evidence).
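
The Daily Activity control can be computed as in the sketch below; the campaign dates are made up for illustration:

```python
from datetime import date, timedelta

def daily_activity(campaign, all_campaigns):
    """Average number of 'live' campaigns per day over the focal
    campaign's lifetime (a simple competing-supply proxy).

    Each campaign is a (start_date, end_date) pair, both inclusive."""
    start, end = campaign
    days = (end - start).days + 1
    total_live = 0
    for d in (start + timedelta(n) for n in range(days)):
        total_live += sum(1 for s, e in all_campaigns if s <= d <= e)
    return total_live / days

others = [(date(2015, 1, 1), date(2015, 1, 31)),
          (date(2015, 1, 10), date(2015, 2, 10))]
focal = (date(2015, 1, 5), date(2015, 1, 14))
print(daily_activity(focal, others))  # 1.5: one rival live all 10 days, one for 5
```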

For the alternative classification, Post-Fraud, we determine a direct pre- vs. post-fraud comparison in success levels of a subsample of the projects posted around the identified dates. We also include fixed effects: campaign category (\(\phi\)) and year (2010–2018) (\(\varphi\)). We use clustered robust standard errors based on campaign categories in all regressions. The alternative classification Post-Fraud allows for a more direct comparison because it has fewer observations and substantially reduces the need to control for periodic fixed effects.

Empirical Results

We use two different samples to study (1) determinants of fraud (credible signals of first-party enforcement), and (2) platform-wide consequences of perceived weak related-party enforcement. We then check for robustness by examining the impact of signals of first-party enforcement (and project quality) on project success, especially when related-party enforcement (platform scrutiny) is perceived to be weak.

For studying “determinants of fraud,” it is important to have a high level of certainty that the identified campaigns are fraudulent, or at least largely perceived as such. This is why we do not include all campaigns reported on Kickscammed or in the media in our dataset. Instead, we check outcomes, e.g., whether the promised product was finally delivered or any communication attempted, to distinguish “failed” from “fraudulent” projects. To study measurable platform-wide consequences, it is critical to identify campaigns suspended later than expected, of larger size, with higher numbers of backers, and with higher pledged amounts in order to ensure that other backers (besides those directly affected) could have reacted to a suspension announcement. Therefore, we conducted the filtering process described previously to gauge which campaigns had the most damaging effects on the market.

Determinants of Fraud

We begin by discussing our results in a univariate setting, and then focus on multivariate analyses to include multiple possible determinants of fraud simultaneously. Table 5 (Table 10 in the Appendix) shows the descriptive statistics (correlation matrix) for the explanatory variables.

Table 5 Summary statistics (“determinants of fraud” analyses)

Table 6 shows the results of difference-in-means t-tests on how the fraud sample differs from the non-fraud matched campaigns on our main explanatory variables. Fraudsters have backed fewer projects (about five fewer) and created fewer projects (about one fewer). The period between their account opening on Kickstarter and the campaign launch date is also shorter (by three to four months). In line with Hypothesis 1, the univariate comparison provides initial evidence that fraudsters are less likely to have engaged in prior crowdfunding activity.

Table 6 Mean differences between fraud and matched sample (“determinants of fraud” analyses)

In accordance with Hypothesis 2, we find that the number of external links is negatively related to fraud. It seems that external links serve a kind of certification role. Thus, more links imply higher reputational capital that can be lost in the case of a fraudulent campaign. We also find that fraudsters are less present or active on Facebook (66% of non-fraud campaigns link to Facebook, compared to only 50% of fraudulent campaigns). The results remain consistent if we examine personal Facebook accounts and Facebook pages separately.

In terms of campaign characteristics, and in accordance with Hypothesis 3, we find that campaign durations tend to be longer for fraudulent campaigns, with an average difference of 2 days. This small variation may be because Kickstarter generally recommends 30 days or less,Footnote 19 and most projects follow that advice. We note that fraudulent campaigns provide more pledge categories, and their descriptions are easier to read. The descriptions can also be interpreted as less sophisticated, because most readability measures correspond to the number of years of formal education needed to understand the text on first reading. The rationale behind this finding is that fraudsters either target a wider and presumably less educated crowd, or put less effort into their descriptions. However, we find no differences between fraud and non-fraud campaigns' use of video pitches. This may be because creators are aware that video pitching is encouraged by platforms and can strongly impact the probability of successful fundraising, as per previous research (e.g., Mollick, 2014).

We turn next to our baseline model, which uses multivariate regressions to evaluate correlations among the three blocks of explanatory variables with fraud—creator(s)’ characteristics/background, social media affinity, and campaign characteristics. Table 7 summarizes our results for the determinants of fraud in Eq. (1). We consider all the main explanatory variables simultaneously. The means of the Variance Inflation Factors (VIFs) range from 1.10 to 1.12. Since they are well below the critical value of 5, there is no indication of multicollinearity (see Kutner et al., 2005).
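
As a sketch of the collinearity check: in a two-regressor design the VIF reduces to \(1/(1-r^{2})\), with r the pairwise Pearson correlation between the regressors. The values below are made up for illustration:

```python
import math

def vif_two(x1, x2):
    """VIF for one regressor in a two-regressor design: 1 / (1 - r^2).
    Values well below 5 indicate no multicollinearity concern."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    var1 = sum((a - m1) ** 2 for a in x1)
    var2 = sum((b - m2) ** 2 for b in x2)
    r = cov / math.sqrt(var1 * var2)
    return 1.0 / (1.0 - r * r)

x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]           # loosely related to x1
print(round(vif_two(x1, x2), 2))  # 3.19: correlated, but below the cutoff of 5
```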

Table 7 Multivariate analysis of determinants of fraud

Our main analysis is in Specification (1), for which the matched non-fraud campaigns are determined by using a one-to-one PSM nearest-neighbor matching method without replacement. For robustness checks, we also show results with replacement (Specification (2)), and for a one-to-two matching method with and without replacement (Specifications (3) and (4)).
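
A greedy one-to-k nearest-neighbor match on propensity scores, without replacement, can be sketched as below. The scores are hypothetical, and real implementations typically also impose calipers and may randomize the processing order:

```python
def match_without_replacement(treated_ps, control_ps, k=1):
    """Greedy one-to-k nearest-neighbor matching on propensity scores,
    without replacement: each control is used at most once."""
    available = dict(enumerate(control_ps))          # control id -> score
    matches = {}
    for t_id, p in sorted(enumerate(treated_ps), key=lambda t: t[1]):
        nearest = sorted(available, key=lambda c: abs(available[c] - p))[:k]
        matches[t_id] = nearest
        for c in nearest:                            # remove used controls
            del available[c]
    return matches

treated = [0.80, 0.30]                               # fraud campaigns' scores
controls = [0.10, 0.29, 0.78, 0.81]                  # candidate non-fraud scores
print(match_without_replacement(treated, controls))
```

Matching with replacement would simply skip the deletion step, allowing a control to serve multiple treated campaigns, as in Specification (2).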

No. of Creator-Backed Projects is negatively correlated with fraud. The coefficient remains stable throughout the specifications, but is only statistically significant in Specification (3). We also find that No. of Creator-Created Projects is negatively related to fraud; the coefficient is statistically significant throughout all specifications. This supports Hypothesis 1, that project creators with higher levels of prior crowdfunding experience are less likely to conduct fraudulent campaigns. It also confirms that backing multiple projects is easier to mimic as a signal for fraudsters than previously created projects.

As shown in Table 7, our main explanatory variables—# External Links and Facebook—in the social media affinity block have a strongly negative relationship with fraud. Therefore, campaigns with either a Facebook page or a personal Facebook account are about 45% (= EXP (− 0.606) − 1) less likely to be fraudulent than their matches (significant at a 5% level—Specification (1)). The number of external links provided on the campaign website (e.g., a YouTube video associated with the campaign, a LinkedIn profile, a start-up’s web page) has a strongly negative correlation with the probability of a campaign being fraudulent. Overall, the results support Hypothesis 2, that fraudsters tend to be less present on social media and provide fewer external links.

Furthermore, in accordance with Hypothesis 3, we find that many campaign characteristics are related to the probability of observing fraudulent behavior. For example, fraudulent campaigns tend to choose longer funding durations ex ante (Hypothesis 3.A). This also comports with the signaling argument that high-quality campaigns choose shorter durations to signal quality and confidence in attaining funding. We find no statistical significance for Min. Pledge Amount (Hypothesis 3.B). This may be because most reward-based crowdfunding campaigns offer small amounts as minimum contributions for non-monetary payoffs, and thus campaigns do not differ substantially on this variable. Our results also show that the number of pledge categories has a significantly positive relationship with fraud. This provides further evidence for Hypothesis 3.C, that crowdfunding fraudsters are more likely to offer a higher number of reward levels.

Finally, Table 7 shows that project descriptions of fraudulent campaigns tend to have lower automated readability indexes (ARI). ARI is an approximate representation of the number of formal years of education needed to comprehend the text on a first reading. A one-level ARI increase from the average score of eleventh grade (U.S. grade level) to twelfth grade decreases the probability of the campaign being in the fraudulent subsample by about 10.5% (= EXP (− 0.116) − 1). This supports Hypothesis 3.D, that fraudsters may target a less educated crowd by using simpler language, or that they do not bother fine-tuning their campaign descriptions. We find no statistically significant effect of Video Pitch on fraud. This may be because more than 93% of our 386 cases used video pitches.
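
The ARI itself is a simple character/word/sentence statistic; a minimal implementation using the standard ARI coefficients (the example texts are invented) is:

```python
import re

def ari(text):
    """Automated Readability Index:
    4.71*(characters/words) + 0.5*(words/sentences) - 21.43,
    approximating the U.S. grade level needed on a first reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / sentences) - 21.43

simple = "We make a game. It is fun. You get one now."
complex_pitch = ("Our interdisciplinary engineering methodology "
                 "systematically integrates heterogeneous manufacturing "
                 "constraints and sophisticated logistics considerations.")
print(ari(simple) < ari(complex_pitch))  # True: simpler text scores lower
```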

We check the robustness of our “determinants of fraud” results by using alternative proxies or complementary explanatory variables in Table 8. To avoid multicollinearity, or interdependent definitions across variables, we do not estimate models with all variables simultaneously. We examine each variable separately, but retain the main explanatory variables from the other blocks as “controls.” Control 1 (creator(s)’ characteristics/background) includes Creator-backed Projects and Creator-created Projects; Control 2 (social media affinity) includes # External Links and Facebook; Control 3 (campaign characteristics) includes Duration, Min. Pledge Amount, No. of Pledge Categories, ARI, and Video Pitch.

Table 8 Multivariate analysis of determinants of fraud (robustness check)

First, within the creator(s)’ characteristics/background block, we test for a relationship between a formal name profile and a natural person profile and the likelihood of a fraudulent campaign. We find no statistically significant relationship. This is attributable to the fact that, on Kickstarter, for example, project creators must verify their identities through an automated process. This information appears on their profiles (although not necessarily as their “profile names”).Footnote 20 However, similarly to backing and creating crowdfunding campaigns, we find that our non-fraud sample creators have, on average, been members of the platform for longer periods of time.

We also test for the influence of social media connections. To avoid having outliers drive our results, we take the natural logarithm of number of connections, defined as the number of friends of a personal Facebook page associated with the campaign creator(s), plus the total number of likes. Despite finding a negative relationship between Log (FB Connections) and the probability of observing fraud, there is no statistically significant separate impact on fraudulent activity. We note that fraud campaigns may be using fake profiles to increase their numbers of “friends” or “likes” in order to mislead potential backers.

Furthermore, within the campaign description details, we identify a significantly negative relationship between ARI and fraud. That is, the probability that the campaign is in our fraudulent sample is higher when the project description is easier to understand. We further check for robustness by using three alternative measures of text readability. As Table 8, Panel C, shows, the Coleman–Liau index (CL), the Gunning Fog index (GF), and the Flesch–Kincaid Grade level index (FKG) all exhibit significantly negative correlations with fraudulent activity. This further validates our inferences.

In sum, our results remain robust to using alternative proxies for prior crowdfunding activity, social media affinity, and readability indices.

Platform-Wide Consequences of Fraud

Table 9 presents the results of multivariate logistic and OLS regressions for our measures of success from Eq. (2). We test for platform-wide consequences of suspended large, public scam campaigns. In Panels A and B, Specifications (1)–(3) include Kickstarter campaigns with a goal amount of at least USD $100; Specifications (4)–(6) show results for campaigns with a goal amount of at least USD $10,000. We analyze the determinants of success measured by Funded (logistic regression; coefficients are the logs of the odds ratios), Log Pledged (OLS regressions), and Log Backers (OLS regressions). Campaigns affected by suspension announcements are classified with the dummy variable Fraud Period (Panel A) or Post-Fraud (Panel B).Footnote 21

Table 9 Multivariate analysis of platform-wide consequences of fraud

Panel A shows that the coefficient of Fraud Period is negative and highly statistically significant for the entire sample, including all campaigns with a goal amount of more than $100 (see Specifications (1)–(3)). In Panel B, we follow a stricter approach, and compare campaigns that ended within 14 days of the announcement (Post-Fraud = 0) with those begun within 14 days afterward (Post-Fraud = 1). This allows for a more direct comparison while requiring fewer observations. It also substantially reduces the need to control for Daily Activity and the sets of “periodic fixed effects” used in Panel A. This is because the pre- and post-fraud campaigns were launched around the same time, which mitigates any concerns about procyclicality.

Overall, the results in Table 9 provide strong empirical support for Hypothesis 4, that the occurrence of fraudulent campaigns and their visibility to potential backers have far-reaching consequences for the success (success probability, number of backers, and funds raised) of concurrent crowdfunding campaigns that begin around suspension dates. Panel A, Specification (1), shows a 6.38% lower probability of funding for campaigns posted within 14 days before/after one of our fourteen identified Kickstarter campaign suspensions (= EXP (− 0.066) − 1), all else being equal (see the coefficient on Fraud Period). In Specifications (2) and (3), the pledged amounts (number of project backers) also decreased in an economically meaningful way. The predicted pledged amount in Specification (2) (predicted number of backers in Specification (3)) is approximately 9.6% (5.3%) lower for projects posted within the fraud period (see again the coefficient on Fraud Period).

For example, considering the average pledged amount of approximately USD $11,000,Footnote 22 campaigns posted within a fraud period lose about USD $1000 of their pledged amounts. The real effect is greater for raised amounts that are actually transferred, because within-fraud period projects have a lower probability of success (reaching goal amount). In case of failure, the pledged amounts are not transferred to the project creators (“all-or-nothing” mechanism). The coefficient estimates of the control variables also show that Duration, Daily Activity, and Goal Amount (Log Goal) negatively affect the success measures, while higher Waiting Time and being Featured by Kickstarter have a positive effect.
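
The effect sizes above follow from the usual exp(β) − 1 transformation of logit and log-linear coefficients into approximate percentage changes; a tiny helper makes this explicit, using the Fraud Period coefficient of −0.066 from Table 9 and the Facebook coefficient of −0.606 from Table 7:

```python
import math

def pct_effect(beta):
    """exp(beta) - 1, expressed in percent: the approximate percentage
    change associated with a dummy switching from 0 to 1."""
    return (math.exp(beta) - 1) * 100

print(round(pct_effect(-0.066), 1))  # about -6.4 (the paper reports -6.38%)
print(round(pct_effect(-0.606), 1))  # about -45.4 (Facebook effect, Table 7)
```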

Next, we examine the sensitivity of our results to changes in the definition of the Fraud Period [Post-Fraud] dummy (in the baseline, we consider 14 days around [before/after] the suspension date). We extend the window one day at a time up to 29 days, then reduce it to 7 days around the suspension date, and repeat the regressions from Table 9, plotting the coefficient of the variable of interest, Fraud Period [Post-Fraud], in Fig. 1, Panel A [Panel B]. We expect to find the most negative coefficients when the platform-wide effects are most severe, i.e., when our sample of affected campaigns is in its first or last weeks of collecting funds. Shortening or extending the observation window relative to the 14-day definition should result in higher coefficient estimates (i.e., lower absolute values of the negative Fraud Period and Post-Fraud coefficients). This is because an overly short window does not capture the effect in full, while an overly long window dilutes the platform-wide effect. Graphically, this yields a V-shaped pattern.
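
Varying the window mechanically changes which campaigns the dummy flags; the toy sketch below shows only that classification step (in the paper, Eq. (2) is re-estimated for every window from 7 to 29 days). The dates are illustrative placeholders:

```python
from datetime import date, timedelta

suspensions = [date(2013, 6, 17)]                      # placeholder date
launches = [date(2013, 6, 17) + timedelta(days=d) for d in range(-40, 41)]

def share_flagged(window_days):
    """Fraction of campaigns flagged by the Fraud Period dummy for a
    given +/- window width."""
    w = timedelta(days=window_days)
    hits = sum(1 for ld in launches
               if any(abs(ld - s) <= w for s in suspensions))
    return hits / len(launches)

for n in (7, 14, 29):
    print(n, round(share_flagged(n), 3))  # wider windows flag more campaigns
```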

Fig. 1
figure1

Sensitivity analysis. Panel A [Panel B] shows the estimated Fraud Period [Post-Fraud] dummy variable regression coefficients in Specifications (1)–(3) of Table 9, Panel A [Table 9, Panel B], using alternative classification schemes for campaigns affected by suspension announcements (Fraud Period and Post-Fraud) for the success measures as dependent variables (Funded (logit), Log Pledged (OLS), and Log Backers (OLS)). N (ranging from 7 to 29 days) determines the number of ± [+] days considered in the Fraud Period [Post-Fraud] dummy variable definition. All calculated coefficients are statistically significant at least at the 5% level

From Fig. 1, Panel A [Panel B], and in line with our reasoning, we observe the strongest effect for the 13 days around the suspension date [13 days pre- and post-suspension announcement]. The effect fades slowly as we increase or decrease the number of days, consistent with the expected V-shaped pattern. We interpret this as further support for the platform-wide negative consequences after the suspension of campaigns that slipped through Kickstarter's initial screening, received a certain level of funding, and were suspended at the last minute.

We find strong evidence for Hypothesis 4, that large public suspensions by Kickstarter (as identified by our filter criteria described above) have noticeably damaging effects on other funding activities. This can potentially hamper entrepreneurship, and negatively affect the economy, employment, and innovation. It also raises interesting policy implications, namely, that platforms’ efforts to mitigate fraud should be focused more strongly on pre-screening mechanisms than on later project suspensions.Footnote 23

Conclusion

This paper is the first to provide an in-depth examination of the factors associated with a higher probability of fraudulent behavior in crowdfunding, and to analyze the short-term consequences of breaches of trust in the market. We provide evidence that legal enforcement by third parties, such as the Federal Trade Commission or regional courts, is rare. Because the penalties are usually small, the focus should be on the pre-screening procedures and liability of crowdfunding platforms.

We contribute to the literature by providing a practical (albeit not legal) definition of fraud in the crowdfunding market, and by identifying a comprehensive sample of campaigns associated with fraudulent behavior. We document campaign- and creator-related factors that tend to differ between fraudulent campaigns and a sample of non-fraudulent matched campaigns. We posit that these factors could be used by platforms to develop fraud-predicting models and fraud-preventing methods. We also provide the first empirical evidence of the effect of possible breaches of trust in the market on crowdfunding success. We discuss the implications of our findings further next.

For crowdfunding platforms, our evidence shows that not all scams are detected ex ante. The lack of fraud detection might justify regulations requiring platforms to improve their pre-screening procedures. However, screenings can become obsolete as fraudsters adapt and learn new ways to avoid detection. Therefore, and as an alternative way to increase trust in the market, platforms could design mechanisms to hold project creators accountable after successful funding. For example, they could halt campaigns once funding goals are reached and service any unmet demand in the after-market. They could also retain any funds raised in excess of the goal as insurance for backers (see Belavina et al. (2020) for a theoretical discussion of these two options).

For policymakers, we believe regulators are correct in attempting to protect less sophisticated crowd members. Until recently, most crowdfunding laws targeted specific branches, primarily equity crowdfunding. Reward-based crowdfunding was less regulated, except in a few jurisdictions such as Germany (Klöhn et al., 2016). Regulators could require reward-based crowdfunding platforms to implement pre-screening for particular quality requirements, or prohibit large overcontributions. However, because platform fees are usually tied to contribution amounts, platforms have weak incentives to limit overcontributions themselves, so regulatory intervention may be more effective. Once dynamically adapting fraud detection models are implemented and mechanisms exist to hold campaign creators accountable, it should become safer to discuss the phenomenon of crowdfunding with old-fashioned securities lawyers without the need for a defibrillator!

For campaign creators, we emphasize the importance of signals of first-party enforcement, as well as project quality, in ensuring backers' trust and successful funding. We show that incidences of fraud in the market can be damaging to campaigns. Creators can mitigate this risk by reducing information asymmetries and providing difficult-to-mimic signals of project quality. For crowdfunding backers, the factors we identify can help in evaluating project riskiness, i.e., the probability of observing misconduct.

Our empirical analysis has some clear limitations. First, not all crowdfunding fraud is detectable. Thus, we may underestimate the true probability of fraud, a challenge for any prediction model. However, we believe that, at least on Kickstarter, it is unlikely that large-scale fraudulent campaigns go undetected. Small-scale fraud should be examined independently, given that its dynamics most likely differ from what we investigate here.

Second, and more importantly, we cannot legally prove the existence of any outright fraud campaigns. Our context does not allow us to empirically test whether creators have misappropriated funds, or developed low-quality products because of poor effort. We also cannot determine whether a judge would consider the “fraudulent” creators in our sample as simply incompetent. As a result, we use the words “fraud,” “misconduct,” and “fraudulent behavior” interchangeably throughout this study. We have tried to be as strict as possible about defining our criteria for including a campaign in the fraud sample.

Our study opens avenues for future research on fraud detection models for reward-based crowdfunding, as well as other forms (e.g., equity crowdfunding). In unreported tests, we examined whether concurrent projects in the same category where fraud occurred experienced more severe consequences. Our results revealed no evidence of statistically significant differences across categories. This may suggest that the borders between categories are somewhat “blurred” in a crowdfunding context (compared to, e.g., publicly listed firms). Also, backers do not seem to differentiate between categories in response to visible suspensions. However, future research could explore backers’ reactions to fraud (or other shocks).

We posit that, once equity crowdfunding emerges more fully in the U.S., we will observe new forms of fraud. This is because the campaigns are more complex, involve higher investment amounts, and usually comprise an entire venture. We expect the nature of fraud to adapt as well, and to require more sophisticated detection mechanisms. Note that, under a reward-based model, fraud generally occurs because founders do not develop the promised products or misuse funds. Under equity crowdfunding, founders may engage in a whole realm of unethical or illegal activities, such as running several start-ups at a time, violating fiduciary duties, or engaging in asset substitution and risk shifting. These can be more challenging to detect. But we believe our predictions will offer interesting avenues for empirical research as the market develops.

Notes

  1.

    See http://money.cnn.com/2013/06/17/technology/kickstarter-scam-kobe-jerky

  2.

    See http://business.financialpost.com/fp-comment/extraordinary-popular-delusions-and-the-madness-of-crowdfunding

  3.

    In Part A of the Online Appendix, we provide a discussion of legal sanctions in the crowdfunding market.

  4.

    See http://www.crowdfundinsider.com/2014/03/34255-crowdfunding-fraud-big-threat.

  5.

    See Kickstarter’s guidelines for a definition of credible communications in the case of a failed project: https://www.kickstarter.com/fulfillment.

  6.

    Our observation period for identifying suspected fraudulent campaigns spans 2010 through 2015, and we classified the campaigns in April 2016. We re-checked all suspected fraud campaigns on December 31, 2018, and excluded those where rewards had ultimately been delivered, a reason for late delivery or failure was provided, or backers were at least partially refunded.

  7.

    This resulted in a further four exclusions from our base media reports fraud sample.

  8.

    See http://www.theverge.com/2013/11/8/5081806/kickstarter-alleged-chargeback-fraud-hits-over-100-campaigns.

  9.

    We use 2010–2015 as the sample period in order to ensure sufficient time (until 2018) to identify “suspected fraud” campaigns, especially in cases where rewards were not delivered.

  10.

    In unreported tests, we examined the differences in means across all independent variables used in “determinant of fraud” analyses between fraudulent campaigns identified via Kickscammed vs. those identified via News Search. The results revealed no substantial differences in means.

  11.

    The chronological sequence of the initiation date, campaign categories, and raised volumes in USD of fraudulent campaigns are in Panel C of Table 2. Fraud campaigns are most common in the “Technology” category (56 cases), where they have also raised the largest amounts (more than $11 million). Fraud campaigns by country for each respective year are shown in Panel D. In our sample, fraud cases occurred most frequently in the U.S. (171 cases); the U.K. (8 cases); and Canada (7 cases); followed by Israel (2); and Australia, China, Germany, Hong Kong, and Spain (1 each).

  12.

    In an unreported table, we checked the quality of the PSM algorithm for our main analysis by using logit estimates for the probability of a campaign being fraudulent. We find that all variables (Goal Amount, Country, Year, and Category) included in the PSM are well balanced between fraud and non-fraud campaigns, and thus there are no statistically significant differences between them. Consequently, our results are not driven by any differences in these variables.

  13.

    See https://help.kickstarter.com/hc/en-us/articles/115005139813-Why-would-a-project-be-suspended.

  14.

    In order to ensure sufficient time for the potential suspension to have affected campaigns posted around the same time, we set this date as three months before our last funded/failed campaign ended (i.e., December 31, 2018). Note that the maximum campaign length is ninety days.

  15.

    To display amounts, Kickstarter converts non-USD currencies to USD using a static exchange rate.

  16.

    Note further that the thresholds we use for the four applied criteria did not have a strong effect on the fourteen identified cases. Relaxing the thresholds within certain margins would still result in the same fourteen suspended campaigns. For example, changing the first criterion to “at least 50% of campaign duration has passed” and the second to “campaign was suspended within 2 weeks of its scheduled deadline,” while retaining the same visibility criteria, results in the same fourteen cases.
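The timing criteria described in this note can be sketched as a simple filter. This is an illustrative sketch only, using the relaxed thresholds quoted above (50% of the duration passed; suspension within two weeks of the deadline); the original thresholds and the two visibility criteria are not reproduced here, and the function name and signature are our own assumptions, not the authors' code.

```python
from datetime import date


def late_suspension(launch: date, deadline: date, suspended: date) -> bool:
    """Illustrative check of the two timing criteria (relaxed thresholds):
    (1) at least 50% of the campaign duration had passed at suspension,
    (2) the campaign was suspended within 2 weeks of its scheduled deadline.
    The visibility criteria discussed in the note are omitted here."""
    duration_days = (deadline - launch).days
    elapsed_days = (suspended - launch).days
    passed_half = elapsed_days >= 0.5 * duration_days
    near_deadline = (deadline - suspended).days <= 14
    return passed_half and near_deadline


# A campaign suspended one week before its deadline satisfies both criteria;
# one suspended a few days after launch satisfies neither.
print(late_suspension(date(2015, 1, 1), date(2015, 2, 1), date(2015, 1, 25)))  # True
print(late_suspension(date(2015, 1, 1), date(2015, 2, 1), date(2015, 1, 5)))   # False
```

As the note observes, the classification is robust: moving these thresholds within reasonable margins leaves the same set of fourteen suspended campaigns.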

  17.

    We exclude “canceled” or “suspended” projects from our main sample because their success or failure does not depend on backers’ decisions.

  18.

    For example, if Kickstarter suspends a campaign on March 15, 2015, the “Fraud Period” dummy equals 1 for all campaigns (either funded or failed) launched between March 1, 2015, and March 29, 2015. Our logic remains the same for any overlap between two suspension dates. For example, if suspension 1 is on March 15, 2015, and suspension 2 is on March 25, 2015, the “Fraud Period” equals 1 for all campaigns launched between March 1, 2015, and April 8, 2015.
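The windowing logic in this example can be sketched as follows. This is a minimal sketch of the rule as described, not the authors' code; the function name and 14-day symmetric window are taken from the worked example above.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=14)  # two weeks on either side of a suspension date


def fraud_period(launch_date: date, suspension_dates: list[date]) -> int:
    """Return 1 if the campaign was launched within 14 days (before or after)
    of any visible suspension date, else 0. Overlapping windows need no
    special handling: membership in any single window is sufficient."""
    return int(any(abs(launch_date - s) <= WINDOW for s in suspension_dates))


suspensions = [date(2015, 3, 15), date(2015, 3, 25)]
print(fraud_period(date(2015, 3, 20), suspensions))  # inside both windows -> 1
print(fraud_period(date(2015, 4, 8), suspensions))   # edge of second window -> 1
print(fraud_period(date(2015, 5, 1), suspensions))   # outside all windows -> 0
```

Note that the overlap between the two suspensions in the example simply merges the windows into one continuous flagged period (March 1 to April 8, 2015).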

  19.

    See: https://www.kickstarter.com/help/faq/creator+questions.

  20.

    All project creators on Kickstarter are required to provide official identification documentation. Each project is attributed to at least one natural person, and the name is publicly available on the campaign web page. The creator’s profile name can be their formal name or a pseudonym, but their full name (first and family name) is readily available by clicking on the profile.

  21.

    Panel A includes main category, year, month of year (January–December), day of month (first day to last day of respective month), and day of week (Monday–Sunday) fixed effects. Moreover, in Panel A, we control for a proxy for platform activity by calculating the average number of daily “live” campaigns during a project’s lifetime (Daily Activity). Panel B includes main category and year fixed effects. The time fixed effects are based on campaign launch dates.

  22.

    Note that the average pledged amount/number of backers reported in Table 2, Panel C, is the average of the log-transformed variables.

  23.

    In Part B of the Online Appendix, we provide further robustness checks for our results.

References

  1. Agrawal, A., Catalini, C., & Goldfarb, A. (2015). Crowdfunding: Geography, social networks, and the timing of investment decisions. Journal of Economics and Management Strategy, 24(2), 253–274.

  2. Aggarwal, R., Faccio, M., Guedhami, O., & Kwok, C. (2016). Culture and finance: An introduction. Journal of Corporate Finance, 100(41), 466–474.

  3. Ahlers, G. K., Cumming, D. J., Guenther, C., & Schweizer, D. (2015). Signaling in equity crowdfunding. Entrepreneurship Theory and Practice, 39(4), 955–980.

  4. Allen, F., Gu, X., & Jagtiani, J. (2021). A survey of fintech research and policy discussion. Review of Corporate Finance, 1(3–4), 259–339.

  5. André, K., Bureau, S., Gautier, A., & Rubel, O. (2017). Beyond the opposition between altruism and self-interest: Reciprocal giving in reward-based crowdfunding. Journal of Business Ethics, 146(2), 313–332.

  6. Assenova, V., Best, J., Cagney, M., Ellenoff, D., Karas, K., Moon, J., Neiss, S., Suber, R., & Sorenson, O. (2016). The present and future of crowdfunding. California Management Review, 58(2), 125–135.

  7. Attig, N., Boubakri, N., El Ghoul, S., & Guedhami, O. (2016). Firm internationalization and corporate social responsibility. Journal of Business Ethics, 134(2), 171–197.

  8. Belavina, E., Marinesi, S., & Tsoukalas, G. (2020). Rethinking crowdfunding platform design: Mechanisms to deter misconduct and improve efficiency. Management Science, 66(11), 4980–4997.

  9. Belleflamme, P., Lambert, T., & Schwienbacher, A. (2013). Individual crowdfunding practices. Venture Capital: An International Journal of Entrepreneurial Finance, 15(4), 313–333.

  10. Belleflamme, P., Lambert, T., & Schwienbacher, A. (2014). Crowdfunding: Tapping the right crowd. Journal of Business Venturing, 29(5), 585–609.

  11. Berns, J. P., Figueroa-Armijos, M., da Motta Veiga, S. P., & Dunne, T. C. (2020). Dynamics of lending-based prosocial crowdfunding: Using a social responsibility lens. Journal of Business Ethics, 161(1), 169–185.

  12. Bertoni, F., Colombo, M. G., & Grilli, L. (2011). Venture capital financing and the growth of high-tech start-ups: Disentangling treatment from selection effects. Research Policy, 40(7), 1028–1043.

  13. Bradford, S. (2012). Crowdfunding and the federal securities laws. Columbia Business Law Review, 1, 1–150.

  14. Brockman, P., El Ghoul, S., Guedhami, O., & Zheng, Y. (2020). Does social trust affect international contracting? Evidence from foreign bond covenants. Journal of International Business Studies, 51(5), 1–34.

  15. Chen, L. (2021). Investigating the impact of competition and incentive design on performance of crowdfunding projects: A case of independent movies. Journal of Theoretical and Applied Electronic Commerce Research, 16(4), 791–810.

  16. Coakley, J., & Lazos, A. (2021). New developments in equity crowdfunding: A review. Review of Corporate Finance, 1(3–4), 341–405.

  17. Colombo, M. G., Franzoni, C., & Rossi-Lamastra, C. (2015). Internal social capital and the attraction of early contributions in crowdfunding projects. Entrepreneurship Theory and Practice, 39(1), 75–100.

  18. Cumming, D. J., Leboeuf, G., & Schwienbacher, A. (2019a). Crowdfunding models: Keep-it-all vs. all-or-nothing. Financial Management, 49(2), 331–360.

  19. Cumming, D. J., Meoli, M., & Vismara, S. (2019b). Does equity crowdfunding democratize entrepreneurial finance? Small Business Economics, 56, 533–552.

  20. Davidson, W. N., III., & Worrel, D. L. (1988). The impact of announcements of corporate illegalities on shareholder returns. Academy of Management Journal, 31(1), 195–200.

  21. Defazio, D., Franzoni, C., & Rossi-Lamastra, C. (2020). How pro-social framing affects the success of crowdfunding projects: The role of emphasis and information crowdedness. Journal of Business Ethics, 171, 357–378.

  22. Diamond, D. (1989). Reputation acquisition in debt markets. Journal of Political Economy, 97(4), 828–862.

  23. Dupont, Q., & Karpoff, J. M. (2019). The trust triangle: Laws, reputation, and culture in Empirical finance research. Journal of Business Ethics, 163, 217–238.

  24. El Ghoul, S., Guedhami, O., Kwok, C., & Shao, L. (2016). National culture and profit reinvestment: Evidence from SMEs. Financial Management, 45(1), 37–65.

  25. El Ghoul, S., Guedhami, O., Nash, R., & Patel, A. (2019). New evidence on the role of the media in corporate social responsibility. Journal of Business Ethics, 154(4), 1051–1079.

  26. Fischel, D. R. (1982). Use of modern finance theory in securities fraud cases involving actively traded securities. Business Lawyer, 38(1), 1–20.

  27. Gino, F., Ayal, S., & Ariely, D. (2009). Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science, 20(3), 393–398.

  28. Grilli, L., & Murtinu, S. (2014). Government, venture capital and the growth of European high-tech entrepreneurial firms. Research Policy, 43(9), 1523–1543.

  29. Hain, D., Johan, S., & Wang, D. (2016). Determinants of cross-border venture capital investments in emerging and developed economies: The effects of relational and institutional trust. Journal of Business Ethics, 138(4), 743–764.

  30. Hainz, C. (2018). Fraudulent behavior by entrepreneurs and borrowers. In D. J. Cumming & L. Hornuf (Eds.), The economics of crowdfunding (pp. 79–99). Palgrave Macmillan.

  31. Hornuf, L., & Schwienbacher, A. (2017). Should securities regulation promote equity crowdfunding? Small Business Economics, 49(3), 579–593.

  32. Hornuf, L., & Schwienbacher, A. (2018). Market mechanisms and funding dynamics in equity crowdfunding. Journal of Corporate Finance, 50, 556–574.

  33. Hornuf, L., Schmitt, M., & Stenzhorn, E. (2018). Equity crowdfunding in Germany and the UK: Follow-up funding and firm failure. Corporate Governance: An International Review, 26, 331–354.

  34. Jiang, T. (2013). Cheating in mind games: The subtlety of rules matters. Journal of Economic Behaviour and Organization, 93, 328–336.

  35. Karpoff, J. M., Lee, D. S., & Martin, G. S. (2008). The consequences to managers for cooking the books. Journal of Financial Economics, 88(88), 193–215.

  36. King, G., & Zeng, L. (2001a). Logistic regression in rare events data. Political Analysis, 9(2), 137–163.

  37. King, G., & Zeng, L. (2001b). Improving forecasts of state failure. World Politics, 53(4), 623–658.

  38. Klöhn, L., Hornuf, L., & Schilling, T. (2016). The regulation of crowdfunding in the German small investor protection act: Content, consequences, critique, suggestions. European Company Law, 13, 56–66.

  39. Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied linear statistical models (5th ed.). McGraw-Hill.

  40. Lee, N., Sameen, H., & Cowling, M. (2015). Access to finance for innovative SMEs since the financial crisis. Research Policy, 44(2), 370–380.

  41. Liang, T. P., Wu, S. P. J., & Huang, C. C. (2019). Why funders invest in crowdfunding projects: Role of trust from the dual-process perspective. Information & Management, 56(1), 70–84.

  42. Lin, M., Prabhala, N. R., & Viswanathan, S. (2013). Judging borrowers by the company they keep: Friendship networks and information asymmetry in online peer-to-peer lending. Management Science, 59(1), 17–35.

  43. Mann, H., Garcia-Rada, X., Hornuf, L., Tafurt, J., & Ariely, D. (2016). Cut from the same cloth: Similarly dishonest individuals across countries. Journal of Cross-Cultural Psychology, 47(6), 858–874.

  44. Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self- concept maintenance. Journal of Marketing Research, 45(6), 633–644.

  45. Mollick, E. (2014). The dynamics of crowdfunding: An exploratory study. Journal of Business Venturing, 29(1), 1–16.

  46. Mollick, E., & Nanda, R. (2015). Wisdom or madness? Comparing crowds with expert evaluation in funding the arts. Management Science, 62(6), 1533–1553.

  47. Parhankangas, A., & Renko, M. (2017). Linguistic style and crowdfunding success among social and commercial entrepreneurs. Journal of Business Venturing, 32(2), 215–236.

  48. Perez, B., Machado, S. R., Andrews, J., & Kourtellis, N. (2020). I Call BS: Fraud detection in crowdfunding campaigns. arXiv preprint. arXiv:2006.16849.

  49. Perino, M. A. (1998). Fraud and federalism: Preempting private state securities fraud causes of action. Stanford Law Review, 50, 273–338.

  50. Rezaee, Z. (2005). Causes, consequences, and deterrence of financial statement fraud. Critical Perspectives on Accounting, 16(3), 277–298.

  51. Rossi, A., Vanacker, T., & Vismara, S. (2021). Equity crowdfunding: New evidence from US and UK markets. Review of Corporate Finance, 1(3–4), 407–453.

  52. Shailer, G. (1999). Classificatory loan pricing as an incentive for signalling by closely held firms. New England Journal of Entrepreneurship, 2(1), 1–6.

  53. Shalvi, S., Gino, F., Barkan, R., & Ayal, S. (2015). Self-serving justifications doing wrong and feeling moral. Current Directions in Psychological Science, 24(2), 125–130.

  54. Siering, M., Koch, J. A., & Deokar, A. V. (2016). Detecting fraudulent behavior on crowdfunding platforms: The role of linguistic and content-based cues in static and dynamic contexts. Journal of Management Information Systems, 33(2), 421–455.

  55. Signori, A., & Vismara, S. (2018). Does success bring success? The post-offering lives of equity-crowdfunded firms. Journal of Corporate Finance, 50, 575–591.

  56. Simmonds, A. R., Sagat, K. A., & Ronen, J. (1992). Dealing with anomalies, confusion and contradiction in fraud on the market securities class actions. Kentucky Law Journal, 81, 123–186.

  57. Sorenson, O., Assenova, V., Li, G. C., Boada, J., & Fleming, L. (2016). Expand innovation finance via crowdfunding: Crowdfunding attracts venture capital to new regions. Science, 354(6319), 1526–1528.

  58. Spence, M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.

  59. Thompson, S. B. (2011). Simple formulas for standard errors that cluster by both firm and time. Journal of Financial Economics, 99(1), 1–10.

  60. Vismara, S. (2016). Equity retention and social network theory in equity crowdfunding. Small Business Economics, 46(4), 579–590.

  61. Zott, C., & Huy, Q. N. (2007). How entrepreneurs use symbolic management to acquire resources. Administrative Science Quarterly, 52(1), 70–105.

Acknowledgements

We thank Professor Greg Shailer (field editor) and two anonymous reviewers for many helpful comments and suggestions. We also thank Eliot Abrams, Ali Akyol, Yan Alperovych, Fabio Bertoni, Harjeet S. Bhabra, Martin Boyer, Steven Bradford, Shantanu Dutta, Philipp Geiler, Alexander Groh, Sofia Johan, Jonathan M. Karpoff, Yuri Khoroshilov, Maher Kooli, Iwan Meier, Fabio Moneta, Miwako Nitani, Juliane Proelss, Anita Quas, Rahul Ravi, Armin Schwienbacher, Silvio Vismara, Thomas Walker, Haoyong Zhou, and Tingyu Zhou, as well as seminar participants at the John Molson School of Business, Telfer School of Management, and HEC Montreal for many helpful comments. We are also grateful for the comments and suggestions gathered during the following conferences: Entrepreneurship, the Internet, and Fraud: Managerial and Policy Implications (Montreal, Canada); Munich Summer Institute 2017 (Munich, Germany); Corporate Governance Implications of New Methods of Entrepreneurial Firm Formation Workshop (Bergamo, Italy); 2nd Entrepreneurial Finance Conference (Ghent, Belgium); 32nd Annual Congress of the European Economic Association (Lisbon, Portugal); Financial Management Association Annual Meeting 2017 (Boston, USA); Crowdfunding Workshop at EMLYON Business School’s Research Centre for Entrepreneurial Finance (Lyon, France); Annual Meeting of the Verein für Socialpolitik 2017 (Vienna, Austria); 36th International Conference of the French Finance Association (Quebec City, Canada); and the 5th Crowdinvesting Symposium (Berlin, Germany). Denis Schweizer gratefully acknowledges the financial support provided through the Manulife Professorship. Lars Hornuf gratefully acknowledges the financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft) under the Grant Number HO 5296/1-1. This project is part of the PhD thesis of Moein Karami, and has been supported by the Social Sciences & Humanities Research Council of Canada under the grant number 435-2015-1495.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Corresponding author

Correspondence to Lars Hornuf.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 69 kb)

Appendix

Appendix

See Table 10.

Table 10 Correlation matrix

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Cumming, D., Hornuf, L., Karami, M. et al. Disentangling Crowdfunding from Fraudfunding. J Bus Ethics (2021). https://doi.org/10.1007/s10551-021-04942-w

Keywords

  • Crowdfunding
  • Entrepreneurial finance
  • Fraud
  • Internet