4.1 The Paradox of the Development Engineer

In any problem-solving endeavor, identifying the right problem and asking the right questions is at least half the challenge. A well-posed problem can suggest an obvious, effective solution, while a poorly chosen problem can lead to dead-end non-solutions that leave no one better off. In this chapter, we consider important questions that should be asked with respect to potential beneficiaries or collaborators, the larger context of a problem, the type of impact, approaches to scale, and ethical considerations.

To begin with, it is worth recognizing that very few people involved with development engineering are intentionally trying to do the wrong thing. If anything, it is the opposite – we are involved because we hope to improve the world in some way. But, international development and development engineering have plenty of critics, too (Sainath, 1996; Collier, 2007; Easterly, 2007; Moyo, 2009; Morozov, 2011; Toyama, 2015). After three-quarters of a century and over $2 trillion in aid, few low-income countries have crossed the threshold into middle income (Collier, 2007; Moyo, 2009). Despite decades of innovation with cookstoves, cold chains, and communication technology, many developing-world communities are not healthier, more empowered, or more informed as a result (Easterly, 2007; Morozov, 2011; Toyama, 2015). Whether we agree with the critics or not (see, e.g., Kenny, 2012; Sachs, 2005), there can be value in engaging with their critique that technological innovation has not always delivered on its socioeconomic promises.

If we accept this reality – if not as universal fact, then as possibility – then we can move on to more productive questions: Why is it that, despite our consciously positive intentions, development engineering often fails to meet its objectives? And, what can we do to increase our chances of success? Since later chapters are devoted to the second question, we focus in this chapter on the first.

If our conscious intentions are positive, but the outcomes fall short, there are only four possibilities:

  1. We have counterproductive unconscious intentions.

  2. Our approach is flawed.

  3. There was bad luck.

  4. The problem admits no solution, at least not one that we, specifically, are able to offer.

(Give it some thought – there really are no other possibilities.) Taken together, these possibilities should instill in us a deep humility: Either we are at fault (possibilities 1 and 2), or the circumstances are beyond us (3 and 4).

And, that brings us to the development engineer’s paradox. All engineering, indeed, all problem-solving, requires a certain confidence, even arrogance; we must believe that we can improve on all prior attempts. Yet, development engineering also requires profound humility. Humility in the face of difficult odds. Humility because of our own deficiencies of skill, knowledge, or character. Humility in the presence of communities, contexts, and constraints that are often not our own.

How can the development engineer’s paradox be mitigated? Confidence without humility leads to overconfidence and arrogance, which in turn closes off our senses to the full range of information that could be available in any creative endeavor: What do potential beneficiaries think? How do our collaborators feel? Are we testing a proposed solution in all of the ways it should be tested? Yet, humility without confidence leads to paralysis and disengagement. The only viable path is one in which we use our confidence as faith in our eventual ability to reach a solution, while we apply our humility to temper every step along the way. We have to believe a solution exists, while doubting – and therefore intensively seeking confirmation for – each element of any proposed solutions. (The converse is a disaster – we would doubt the possibility of a solution, while being blithely confident in whatever options occur to us.)

4.2 Beneficiaries and Aspirations

So, humility is essential throughout any development engineering project, but it is perhaps most important when it concerns our understanding of potential beneficiaries and what is good for them. In fact, the very terms “beneficiaries” and “what is good for them” trigger unease among experienced development practitioners. Of course, anyone who engages in development is seeking to do work that is good for someone else, so we cannot fault the attempt. The problem lies, rather, with the presumption that we know “what is good for them,” and, even more fundamentally, with the presumption that we could ever actually know.

Indeed, the most common and most accurate criticism leveled at international development may be that of misguided paternalism: someone who presumed to know what was good for another person, another community, or another nation turned out to be wrong. The history of development is full of such examples, large and small. For decades, Western nations pushed “Washington Consensus” policies that encouraged low-income countries to lower trade barriers in the name of economic growth (Williamson, 2011); with hindsight, we see that countries that complied with the consensus opened themselves up to exploitation, while those that strategically defied it, like China, protected their homegrown businesses and thrived (Serra & Stiglitz, 2008).

Development engineering has its share of such failures, as well: The PlayPump (Costello, 2010), the Soccket (Kenny & Sandefur, 2013), One Laptop per Child (Villanueva-Mansilla & Olivera, 2012; Beuermann et al., 2015), and many other technologies have failed to deliver on grand promises, even as rich-world proponents pushed them into low-income communities. There are also homegrown efforts, like the Computador Popular and the Simputer (Fonseca & Pal, 2006), which have disappointed.

The obvious way to avoid presumption is to engage deeply and continuously with potential beneficiaries – to understand what they want, what they are constrained by, what resources they have, what strengths they can build on, and what dreams they have for the future. Development engineers and, sometimes, their critics have therefore refined a host of approaches to the design of solutions informed by beneficiaries: participatory design (Schuler & Namioka, 1993), cooperative design (Bødker & Kyng, 2018), co-design (David et al., 2013; Ramachandran et al., 2007), participatory action research (Kemmis, 2006; Kemmis & Wilkinson, 1998), community-based participatory design (Braa, 1996), ethnographic design (Blomberg et al., 2009), human-centered design (Putnam et al., 2016), user-centered design (Putnam et al., 2009), asset-based community development (Kretzmann & McKnight, 1996; Mathie & Cunningham, 2003), and so on. What underlies all such approaches is a respect for potential beneficiaries as people who deeply understand the problem context, who have their own creative talents, and whose buy-in is required for uptake, impact, and sustainability of the solution. Partly for this reason, many practitioners of these methodologies prefer to refer to beneficiaries as “partners” or “collaborators.”

In earlier chapters, we briefly reviewed participatory and human-centered design approaches, but here we mention a mindset that we have found useful in our own work and which we find counters some of the worst pathologies of “traditional” engineering: Instead of focusing solely on solving material needs, we should look for ways to align solutions with beneficiary aspirations.

We define an aspiration as “a desire that is persistent and aiming for something higher” (Toyama, 2018). It is useful to contrast aspirations with needs, the latter being a common focus of both engineering and international development, in which the goal is to understand and address human needs through “needs assessments.” While needs are often defined in relation to negative experiences – such as pain, hunger, illness, or poverty – aspirations are optimistic and forward-looking. Needs are also highly volatile, intensely felt but tending to vanish upon being met; in contrast, aspirations persist over the longer term. Thus, when projects connect to beneficiary aspirations, beneficiaries are more likely to engage productively and for the longer term.

Among other things, being guided by beneficiary aspirations helps avoid the presumptions of paternalism. For example, in some very low-income communities, parents prioritize sons over daughters for food and education; they are betting on their sons for future income – unfortunate but sensible in patriarchal societies. Of course, the girls need nutrition and school as much as the boys, but few parents feel the tug of that need. Outside efforts to reprimand parents or to otherwise push schooling rarely stick (Herz et al., 2004). In one Indian context, however, the example of just a handful of women per village being recruited into high-paying white-collar jobs improved schooling and nutrition outcomes for other local girls (Jensen, 2012). In other words, when parents saw that girls could also meet family aspirations for income security, they invested more in their daughters.

Another example is encouraging the use of toilets. Some communities have a cultural aversion to using toilets even when outsiders bother to build them in their neighborhoods. The sanitary need is not felt. (Or perhaps the modern ideal of cloistering in a tiny room with one’s own waste is not as attractive as going in the big outdoors!) In Haryana, India, however, toilet construction improved greatly in response to the “no toilet, no bride” campaign, in which women were encouraged not to marry men who did not have an indoor toilet (Stopnitzky, 2017). Celebrities were recruited to endorse the idea, thereby aligning aspirations for marriage and middle-class lifestyles with sanitation.

Finally, there is the option of addressing needs and aspirations simultaneously through unconditional cash transfers, in which households are given cash to spend as they see fit. It turns out that for a range of contexts, families – free of externally imposed donor preferences or judgments – not only relieve short-term needs but apply the funds toward longer-term aspirations (Weidel, 2016).

4.3 Framing the “Problem”

So far we have discussed the bottom-up approach to discovering development challenges, by observing people’s daily lives and understanding their aspirations. This clearly plays an important role in defining new questions and opportunities in development engineering. Yet to holistically characterize a development challenge, researchers must incorporate “top-down” insights as well, by exploring the market, institutional, and social failures that create and reinforce poverty. These are often complex, high-level challenges that communities deal with every day – from the high costs of moving rural goods to urban markets to the hassles of accessing healthcare from unreliable public clinics. Yet the forces underlying these failures are not always evident to observers on the ground.

Nevertheless, their dynamics are of first-order importance in defining the problem to be solved and in finding a solution that scales. Fortunately, there are existing resources that can help us understand the behavior of markets and institutions in many low-income settings. These include national statistics and international surveys, as well as quantitative academic research published in social science journals. When we combine on-the-ground observations and anecdotes with “top-down” models and insights, we can more thoroughly characterize the complex problems facing communities.

Of course, there is always the risk that we become too reductionist in our definition of the problem. In trying to simplify a complex challenge, we may over-rationalize, failing to notice the details that really matter (Scott, 2020). We may design a solution so artificial that it spectacularly fails, resulting in great calamity for the “beneficiaries” (Duflo, 2017). As it turns out, most development problems are complicated, multifaceted, and interconnected. They can be framed through multiple lenses. For example, in Chap. 16, Tarpeh and co-authors discuss the challenge of urban sanitation in East Africa. It can be framed as a public health problem, a food production puzzle, an issue of failed governance, or a matter of planetary boundaries (Rockström et al., 2009). How you decide to frame the problem is often a moral, political, or personal choice that cannot be reduced to simple cost-benefit analysis. You may think of urban sanitation as an opportunity to capture economic value from waste, or you may see it as part of a global strategy to reduce human reliance on synthetic fertilizers. It is equally valid to view it as a public health emergency, through the lens of health as a human right. Asking the “right” question often depends on your personal values and motivations.

This, in turn, requires us to acknowledge our own identities and the privileges we benefit from. Privilege is a sort of unearned power, and it often accrues to people with higher education, many of whom will be engaged in research at some point in their careers (Adhikari et al., 2018). If you are reading this book, you likely are endowed with a privilege that creates blind spots. Privilege can make it difficult for you to directly observe or experience a community’s preferences, because there is a cultural distance that will always mark you as separate. You may get less direct feedback, your ideas may be challenged or vetted less thoroughly, and your casual requests may be prioritized over others’ substantive needs.

There are productive ways to navigate and mitigate these risks. One of the most effective is to work with partners who have fewer blind spots than you. A useful example is the model used by Digital Green, an NGO that advises small-scale farmers on best practices using a combination of digital technology and grassroots partnerships. The core insight of Digital Green is that smallholder farmers are themselves great experimenters and teachers (Gandhi et al., 2016). The platform enables farmers to teach each other about agricultural practices that boost income, through local production and dissemination of video content. This model taps into the credibility that local farmers carry within their communities; it also helps to overcome the blind spots of the engineers working on Digital Green’s backend infrastructure.

Other strategies include working for or alongside community organizations before engaging in research (which can improve the richness and relevance of your research questions) and proactively discussing and addressing power imbalances in research relationships. Researchers from elite, wealthy institutions often benefit from greater financial resources and access to government leaders; this comes with complex ethical obligations. Researchers in less resourced institutions may have unique knowledge, perspectives, and community relationships (Naritomi et al., 2020); at the same time, they often face difficult-to-navigate community expectations. Each of these contributions (and accompanying constraints) must be materially recognized, even when they are intangible or difficult to quantify.

If you are a researcher with privilege, it is useful to actively solicit feedback, invite questions about your own contributions to a project, and look for opportunities to understand and invest in your colleagues. A range of professional resources has also emerged over the last few years: the African Academy of Sciences, the Mawazo Institute, and the EASST Collaborative are all developing models for cross-national research collaboration (Hoy, 2018; Naritomi et al., 2020), including tools to more equitably control decisions about funding, co-authorship, and study design.

In summary, to define a problem well it is valuable to spend time on the ground, connecting with the communities you seek to empower and participating in their lived experiences. It is enriching to read deeply about the history, politics, and markets of the countries and communities you work in (Adhikari et al., 2018) and to form mutually respectful partnerships with local organizations (Tindana et al., 2007). These experiences will shape your own framing of problems, even if unconsciously. They may help you embrace complexity, avoiding the constraints of the “rational.”

4.4 Conducting Ethical Research

Once a problem has been identified, how does one proceed ethically and responsibly with research? What does “ethical research” mean for a development engineer? The reality is that there is no consensus definition of ethics in research. More than anything, it is a framework for examining trade-offs in the decisions we make. First, we must recognize that engineers are interventionists: Just like economists and clinicians, engineers intervene in people’s lives. Development engineers, in particular, seek to solve human problems; this work necessarily involves interaction with people and communities.

Here we will not address whether it is ethical for outsiders to conduct research in low-resource countries; much has been written on this, particularly in the domain of global health (Tindana et al., 2007). We simply note that there are compelling opportunities to contribute in the sphere of economic and human development; and while we will face ethical challenges whenever we intervene, we can engage with humility.

This will require us to ask difficult questions, like:

  • Do the benefits to participants in research outweigh potential harms? How are benefits and risks being defined: by outsiders or by the participating community itself?

  • Will the benefits of research be distributed fairly to all parties, in accordance with their risks and contributions?

  • Will beneficial downstream products of research (including generalizable knowledge) be made accessible to people after the study is completed? What promises are being made, either explicitly or implicitly, in this regard?

  • Have human subjects provided locally meaningful and substantive “informed consent” for their participation in research?

Asking questions like these can help minimize the risks of exploitation (El Setouhy et al., 2004). But there are several other practical considerations when it comes to responsible research. When conducting research in less developed environments – where legal and regulatory protections may be limited, opaque, or altogether absent – the responsibility for oversight often shifts to the researcher (Alper & Sloan, 2014). For example, in the absence of a strong environmental regulator, you may need to develop your own checklists and protocols for monitoring and mitigating the potential environmental impacts of your study. You may need to strengthen the financial management practices of local partners, so that they can legally accept research funding from external donors. You may even need to help establish a local institutional review board (IRB). In all cases, it is essential to involve (and compensate) local researchers, to ensure that your practices and protocols are appropriate and adapted to local norms.

Research Involving People

Any research that involves human participants – for example, in interviews, surveys, or usability testing – must obtain prior approval from an IRB or Research Ethics Committee (REC). In some cases, an expedited review may be possible, particularly if the intervention poses little risk of harm to humans. Indeed, some IRBs are beginning to innovate, recognizing that the evaluation of social interventions generally poses less risk of harm than the testing of novel clinical treatments (Schopper et al., 2015). Still, the process of completing a submission to an IRB is a valuable one and pushes the researcher to systematically evaluate whether a project’s benefits to the community will actually outweigh its risks.

Engineers operating in areas of medicine and public health are advised to follow practices developed and adopted by the global health community, which draws from a rich body of scholarly work in international bioethics (Pinto et al.). For development engineers operating outside health, an excellent resource on conducting responsible research in low-income countries is the Oxford Handbook of Professional Economic Ethics (2016). A chapter by Glennerster and Powers focuses on the ethics of randomized evaluations; another chapter by Alderman, Das, and Rao takes a broader view of field work, including issues related to trust, transparency, and privacy.

Finally, it is important to communicate ethically with partners and subjects about your research interests and motivations. Creating hope around a promising innovation – one that will not necessarily continue after the study ends – can be harmful to communities. In many developing communities, outsiders are seen as wealthy power brokers, and locals may be eager to comply and please (even in something as simple as serving researchers tea with milk that they would otherwise give to their children). Are we being sensitive to these power dynamics?

Taking a Broader View

Researchers often implement their studies in cooperation with relatively nimble, efficient nongovernmental organizations. What is the obligation, then, to connect with local government staff as you proceed? There may be little incentive to engage with bureaucracies. Yet the eventual scale-up of any “proven” innovation will likely require some government input – whether for the actual scaling of the intervention (e.g., in the case of a novel public service) or in the form of approvals for the authority to operate at scale.

In general, researchers also find it worthwhile to engage local government agencies and even informal community institutions when implementing a research project. Their leaders can provide valuable insights and support that promote the success of research projects. Engagement can also build the government’s capacity to regulate new classes of technology, if required. The recent expansion of machine learning algorithms (and their penetration into areas of consumer finance, retail, and even government services) has broadened most governments’ understanding of public safety and consumer protection. There is a rich literature on the fairness, accountability, transparency, and ethics of machine learning in the “real world,” and development engineers would benefit from exposure to these ideas (Abebe et al., 2020).

Finally, maintaining a safe research environment requires considering the risks experienced by students, enumerators, and other research staff. Research can introduce risks to students and to the communities they engage with; these risks should be anticipated and managed (Pinto & Upshur, 2009).

The Money

As the saying goes, “money talks,” and in development engineering, whoever provides the funding has significant, sometimes subtle, influence. Whether the funding comes from a multilateral agency, a national government, a philanthropic foundation, an individual donor, or a private enterprise, money almost always has its own agenda, and that agenda may not fully align with the goal of altruistically serving beneficiaries. That tension can be the cause of ethical challenges. Just to provide some examples:

  • Donors often want to take credit for large-scale impact. To this end, they may prefer that millions of people are “reached” or “touched” by an intervention, whether or not it has any positive impact.

  • Governments are sometimes mired in politics, whereby projects underwritten by one administration are redirected, canceled, or sabotaged by another administration.

  • For-profit companies need to break even to survive. They may cut corners, seek richer customers, or misrepresent impact in a bid to become viable.

These are just a few examples, but all of them have consequences for fund recipients. The ethical development engineer will be forced to consider and make difficult decisions. In the best of cases, a funder may be appeased with minor tweaks to the optimal path for beneficiaries; in the worst, practitioners may feel they have no choice but to end their involvement (only to see less ethical colleagues take up the project). Sometimes, even the option to withdraw from a project may cause additional ethical problems.

Like many ethical quandaries, these dilemmas have no easy answers, but there are heuristics to minimize or mitigate them. First, many ethical issues can be avoided through sufficient advance dialogue, planning, and transparency among key stakeholders. If it is known up front what a donor most hopes for with their funds, alternatives can be considered and expectations can be managed; and again, in the worst case, it is possible to walk away from the funds without having engaged a vulnerable community. If a donor agrees to a set of plans for a project, including the expected trajectory after the period of funding, then they have little basis to complain when the plan is carried out. (Of course, a thorough plan would also factor in a range of contingencies.)

Next, ongoing transparency and honesty can also help. When unexpected situations arise, communicating them to donors can avoid worse disappointments down the line. Most donors engaged in development work genuinely care about beneficiary communities – and even those who do not will still prefer to appear to care. If a project cannot go as planned because it turns out it would harm beneficiaries, a change of course can often be negotiated.

Finally, it is worth remembering that despite the inherent power imbalance between funder and funded, sponsors of development engineering need the engineer. When anything requiring an ethical breach is requested, the engineer has the power to push back, even if only by threatening to cease the work. Exercising that power when necessary ensures that the development engineer is not complicit in problematic action.

4.5 Impact and Scale, Take 2

At this point, we revisit the issues of impact and scale that were discussed in Chap. 3 but incorporating some of the discussion in this chapter. One critical skill for the development engineer is the ability to make judicious trade-offs among ideals of practice. Ultimately, that judgment comes from experience and reflection (another reason for humility), but we offer some suggestions below.

Scope

Few problems are purely technology problems, because technology does not work by itself – it depends on capable users as well as ongoing maintenance and upkeep, all of which in turn require favorable social, cultural, economic, institutional, and political conditions. Human beings are creatures of habit, but any new technology necessitates that someone do something differently, whether it is for homemakers to adapt to a new type of cookstove, for healthcare workers to perform their rounds in a new way, or for bank regulators to devise new policies for mobile payments. All of these efforts are extra-technological and therefore require something beyond technical engineering. Any engineering project in which these other efforts are not addressed is bound to disappoint.

But, neither is it generally realistic to expect development engineers to train users, supervise healthcare workers, or affect policy. That brings us to a critical question that we believe all development engineers should ask early in a project: “Is there a potential organizational partner with shared impact goals that has the relationships, the capacity, and the commitment to address most extra-technological challenges?” If not, it is worth reconsidering the project. If so, working with such a partner is perhaps the most effective way to ensure meaningful impact. There are many benefits to working with a capable partner aligned with one’s objectives: A good partner will often have deep insight into beneficiary context and aspirations; they will serve as a sounding board; and they often have the critical relationships necessary for larger-scale impact. Perhaps most of all, partners free the development engineer to focus on the technical, economic, and social innovations they are best suited to contribute, while ensuring that implementation is of high quality. The alternative is for the development engineer to establish such an organization themselves, but that requires a skillset and level of commitment well beyond development engineering.

External Validity and Scale

Will outcomes that hold for one community also hold for another? That is the question of external validity (Cartwright, 2011; Banerjee et al., 2017), an issue critical for development projects that seek impact beyond their pilot community. Strictly speaking, even the most rigorously run evaluations cannot claim validity beyond the group from which participants were sampled or the context in which the trial was conducted (Toyama, 2015). Of course, some external validity can be inferred based on hypothesized mechanisms (Pearl & Bareinboim, 2014) – the impact of a new medicine will likely transfer from one human group to another, based on universally shared biology; behavioral interventions may transfer based on shared psychology. But, little can be taken for granted (Gauri et al., 2019).

And, because context often changes with scale, external validity is also a concern as projects scale up. To some extent, the computer software industry has internalized this lesson. Large multinational companies routinely involve “growth” experts at the earliest stages of engineering design, to ensure that new products incorporate enough flexibility to operate within varied market conditions, supply chains, and regulatory regimes. The problem to be solved by a product may be near-universal, like paying for electricity using a mobile phone; but the underlying technology and feature set will vary across geographies and cultures.

An entire profession – alternately called internationalization, localization, or globalization – has emerged around the alignment of software designs with the diverse social norms, business practices, laws, and technical constraints found in different countries (Aykin, 2004). This aspect of product design is complementary to user-centered design and equally important. As anyone who has worked in the field knows, it is not simply a matter of translating product manuals into new languages. It touches hardware design, database design, system architecture, algorithm development, service level agreements, and so much more (Jimenez-Crespo, 2013). The upfront investment in flexibility allows companies to build for sustainability and scale; it enables expansion to new markets and policy environments without a complete re-engineering of the product.
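
To make the design principle concrete, here is a minimal sketch in Python – with entirely hypothetical locales, names, and figures – of one common localization pattern: isolating locale-dependent details in a configuration layer so that core product logic is written once. Real internationalization frameworks (e.g., gettext or ICU) are far more comprehensive; this only illustrates the idea.

```python
# A toy illustration (all names and locales hypothetical) of isolating
# locale-dependent details in a configuration layer.
from dataclasses import dataclass

@dataclass
class LocaleProfile:
    """Locale-dependent details a mobile-payment feature might need."""
    currency: str           # ISO 4217 currency code
    decimal_separator: str  # "." in en-US; "," in much of Europe and Brazil
    phone_prefix: str       # country calling code for SMS receipts
    sms_fallback: bool      # support feature phones where smartphones are rare

PROFILES = {
    "en-KE": LocaleProfile("KES", ".", "+254", sms_fallback=True),
    "pt-BR": LocaleProfile("BRL", ",", "+55", sms_fallback=False),
}

def format_amount(amount: float, locale_code: str) -> str:
    """Render a payment amount using the conventions of the given locale."""
    profile = PROFILES[locale_code]
    text = f"{amount:,.2f}"  # e.g., "1,234.50" in US-style formatting
    if profile.decimal_separator == ",":
        # Swap thousands and decimal separators for comma-decimal locales.
        text = text.replace(",", "\x00").replace(".", ",").replace("\x00", ".")
    return f"{text} {profile.currency}"

print(format_amount(1234.5, "en-KE"))  # -> 1,234.50 KES
print(format_amount(1234.5, "pt-BR"))  # -> 1.234,50 BRL
```

The point is architectural: because locale behavior lives in data rather than in conditionals scattered through the codebase, entering a new market means adding a profile, not re-engineering the product.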

Whether the goal is to transplant a solution to another setting or to scale up a solution, the approach to external validity is theoretically simple, though often a challenge in practice: Repeat the same innovate-implement-evaluate cycle discussed in Chap. 3 for each new setting and as the intervention is scaled up. What works in one village may need adaptation to work in another or at the district level; what works at the district level will need tweaking for the state or province level; what works for a state will need modification for national impact; what works for one country may need adjustment for another country. And, the changes required are likely to be both technological and socio-political. What can be handcrafted at one scale may need assembly-line or factory production at higher scales. User engagement that is ad hoc at one scale may need to become systematized, institutionalized, and possibly legalized at higher scales. A useful rule of thumb for development engineering projects is that new research is required at every order of magnitude or two of scale.

Similarly, in each new context and at each level of scale, it is essential to ensure through additional evaluations that desired impacts are continuing. Many programs succumb to sociologist Peter Rossi’s “Iron Law of Evaluation”: “The expected value of any net impact assessment of any large scale social program is zero” (Rossi, 1987). Though perhaps overstated, the iron law points to the very real tendency for project impact to dissipate with displacement or scale, as indifferent bureaucrats and less-informed beneficiaries play a proportionally larger role in implementation.

Cost-Benefit Analysis

Assuming that evaluations demonstrate an intervention to be effective, another important question is whether the intervention is cost-effective, especially compared with alternative solutions. For well-focused goals such as increasing the number of vaccinated children or improving educational test scores, it should be possible to capture project effectiveness with a cost-benefit analysis. The underlying idea is simple – compute the financial cost per unit of impact and compare it against the cost-benefit ratios of interventions with the same objective (Dhaliwal et al., 2013). In practice, this can be somewhat difficult: cost data must be carefully gathered, and an honest accounting must be made of fixed costs (which can be distributed over a large program) and variable costs (which are incurred on a per-unit-of-impact basis); any comparative analysis would also need this information for the alternative interventions (Levin & McEwan, 2000; Brown & Tanner, 2019).
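
The core arithmetic, at least, is simple, as the sketch below illustrates; all figures and program designs are hypothetical, chosen only to show how the split between fixed and variable costs interacts with scale. A real analysis would rest on carefully gathered cost data and would need the same breakdown for each alternative intervention.

```python
# A minimal sketch of a cost-per-unit-of-impact comparison.
# All figures and program designs are hypothetical.

def cost_per_impact(fixed_cost: float, variable_cost: float, units: int) -> float:
    """Total cost divided by units of impact (e.g., children vaccinated).

    Fixed costs (equipment, training) are spread across all units;
    variable costs (doses, travel) accrue per unit of impact.
    """
    return (fixed_cost + variable_cost * units) / units

# Two hypothetical vaccination-outreach designs at pilot scale:
print(cost_per_impact(fixed_cost=50_000, variable_cost=4.0, units=20_000))
# -> 6.50 per child (clinic-based design)
print(cost_per_impact(fixed_cost=120_000, variable_cost=1.5, units=20_000))
# -> 7.50 per child (mobile-team design)

# At ten times the scale, the ranking reverses: the mobile design's larger
# fixed cost is spread thinner, and its lower variable cost dominates.
print(cost_per_impact(fixed_cost=50_000, variable_cost=4.0, units=200_000))
# -> 4.25 per child
print(cost_per_impact(fixed_cost=120_000, variable_cost=1.5, units=200_000))
# -> 2.10 per child
```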

What further complicates cost-benefit analyses is that an intervention’s total benefits and side effects are rarely enumerable, let alone measurable. Most cost-benefit analyses require some estimation of intangible factors, some of which may outweigh tangible benefits in importance.

Even when imperfect, however, some rudimentary analysis is worthwhile to gauge cost-effectiveness. One point of reference is that of the aforementioned unconditional cash transfer, in which beneficiaries are given cash – typically about $1000 – with no strings attached (Weidel, 2016). Evidence is accumulating that such gifts have a range of long-term benefits for poor households, with little negative effect. Proponents have begun to call for such transfers to be considered the benchmark when evaluating development programs (Blattman & Niehaus, 2014) – if a program’s cost-benefit profile does not at least match that of cash transfers, why not just give the costs of the program directly to beneficiaries?

Unanticipated Negative Effects

Critics of development often note that projects have negative unintended consequences (Merton, 1936). Unintended consequences can arise for many reasons. Sometimes, a technology or intervention may cause direct harm as a side effect, as was the case with DDT, a powerful pesticide that turned out to be toxic to many animal species (and to humans in large doses; see Carson, 1962). In some cases, a technology can enable harmful forms of mass misuse, as with the global spread of misinformation and extremism on social media (Singer & Brooking, 2018; Fernández-Luque & Bau, 2015). In other cases, an intervention effective at a small scale can backfire at larger scales, as occasionally happens with improvements in agricultural productivity that lead to regional gluts and a subsequent decline in prices (Burke et al., 2019).

Yet another class of unintended consequences occurs when one person’s benefit causes someone else harm or the perception of harm. Projects aimed at empowering women or minority groups, for example, can backfire by incurring hostility and possibly violence from the oppressing group (Sultana et al., 2018). Similarly, poverty alleviation efforts can elicit envy and resentment from neighbors who might also be impoverished but unable to take part in an intervention. Especially with technological interventions, which tend to amplify underlying human forces, existing inequalities may be exacerbated (Toyama, 2015). The resentment that results from growing inequality can provoke conflict, especially when layered on existing divisions of caste, race, religion, or ethnicity.

These are just a few examples of unintended consequences. By definition, we cannot know all of a project’s unanticipated effects in advance, so the best we can do is to “stay with the trouble” (Haraway, 2016) – to continue to engage and address negative consequences. This, again, reinforces the need to engage with beneficiary communities throughout, and possibly even beyond, the project lifecycle. Knowing, as we now do, that there are always unintended consequences, failing to do so amounts to neglect and indifference.