1 PeaceTech-WarTech Interfaces

In her book First Platoon (2021), prize-winning journalist Annie Jacobsen tells the following story from the conflict in Afghanistan. I summarise for my purposes, but she tells it better and it is worth reading in full.

Kevin is working for the US Army in Afghanistan as an expert in ‘pattern-of-life’ analytics, an experimental form of behavioural science. He monitors the behaviour of suspected Taliban. He works in a unit that uses a ‘Persistent Ground Surveillance System’, involving an array of cameras attached to a giant tethered balloon, through which he watches people from a remote bunker. He has spent time watching and analysing the daily habits of a man wearing a purple hat, who is being tracked as part of an attempt to warn US platoons of impending attack, or to track individuals associated with ‘terrorism’. This man has been identified as a ‘bomb emplacer’ who buries improvised explosive devices (IEDs) for the Taliban. If he is identified in the process of placing IEDs, he can legally be targeted and killed according to army rules of engagement. All the information from Kevin, from the balloons and from other sources is fed into a Palantir knowledge foundry system (remember Chap. 9), where it is supposedly crunched with a range of other data. This system is used to identify people and make decisions about the legitimate targeting of suspects, but permission to access it is above Kevin’s paygrade, so someone else reviews all the evidence and makes the decision.

Kevin is told one day that the man with the purple hat has been located in the act of emplacement and is about to be killed. He looks at the image feed as the strike is being put in place. But what he sees does not match the close personal study he has made of this man and his habits. While the computer algorithm has identified—‘this is the man’, Kevin doubts it, and calls in his doubts, leading to the strike being called off.

Kevin was right: the target was a civilian farmer in a field.

2 Unpicking Ethical Concerns

In the story, Kevin’s human analysis was more accurate than Palantir’s machine-supported deductions. Jacobsen shows that there is really no transparency that would enable an understanding of the Palantir algorithm, and therefore of why it ‘went wrong’. Indeed, it is likely that those relying on the foundry did not understand how it worked, and—we may speculate—perhaps neither did Palantir entirely itself.

Jacobsen’s book is eye-opening in showing just how far WarTech has developed. She focuses on the use of personal and biometric data in Afghanistan, including iris scans, fingerprints, photographs, occupations, home addresses and names of relatives. As Jacobsen describes, this data was used to track the Taliban.

Personal biometric data was also used in Afghanistan as part of GovTech to try to address corruption—for example, the payment of ‘ghost soldiers’, that is, people on the books as soldiers and drawing salaries who did not exist. The existence of ghost soldiers has been blamed as one of the reasons why an army of 300,000 fell so quickly to the Taliban once the US pulled out in August 2021—many of the 300,000 did not exist (see BBC, 2021).

In a further twist to the tale, when the US, UK and other NATO countries withdrew from Afghanistan, this biometric data was left behind without any security. It now appears it is being used by the Taliban to identify and kill former government workers whom they perceive as enemies (Human Rights Watch, 2022). In other words, the systems used as GovTech to enable civicness are now used by enemies of civicness to target and kill.

Afghanistan graphically illustrates how projects ‘for good’ become intertwined with war efforts in complex and unpredictable ways.

Also worth noting—the story opens a window onto the fragmentation of conflict and peace that I have labelled one prong of ‘double disruption’. Afghanistan was invaded in 2001 by an international coalition led by the US, to destroy Al Qaeda after their involvement in the Twin Tower attacks in New York. This also displaced the Taliban, who had supported them and were in government. What followed was a paradoxical international attempt to incubate a ‘locally owned’ transition focused on achieving stable and democratic institutions. Over the following 20 years, this transition became ever ongoing, overlaying new transitional structures and processes on old ones. The transition negotiations and outcomes excluded the Taliban, until a parallel deal was signed in 2020, not between Afghans, but between the Taliban and the US, providing for US troop withdrawal the following summer. The Taliban used fragmentation to sweep to power in a show of unity, but now find themselves dealing with ongoing fragmentation, including in the range of armed groups they encounter. Afghanistan illustrates the flaws of internationally constructed transition, and the complex ways in which such efforts connect to the digital transformation of securitization and war. It also illustrates what our wider research indicates is a characteristic of the new conflict landscape: ‘critical junctures’ arise that have the capacity to create sudden reversals from peace trajectories to war outcomes in a matter of days.

Against such a peacebuilding backdrop, how can we begin to think about the ethical challenges for PeaceTech, and what frameworks and regulations exist to govern them?

3 Ethical and Moral Concerns

Particular dynamics in the conduct of war raise ethical, data protection and harm issues distinct from those of more peaceful contexts. Distinctive ethical challenges arise even more when new technologies are thrown into the mix. As the issues are legion and often very specific to the type of digital innovation being deployed, I will point in a general way to the gaps in existing frameworks, and to resources that are beginning to plug them.

There are three quite different sets of ethical questions, using the term broadly, that should be considered as part of PeaceTech design, and I use them to structure this discussion.

Ethics and Impact Concerns. The first set of concerns focuses on the impact of PeaceTech interventions and on ensuring that PeaceTech is not inadvertently supporting non-peaceful activities that can hurt people; in other words, that PeaceTech cannot be ‘flipped’ into WarTech. These concerns are common to all peacebuilding interventions and have existing legal and policy frameworks, but digital innovation in conflict areas poses additional challenges that these frameworks often do not cover.

Good Practice and Process Concerns. The second set of concerns focuses on questions of the ethical design of PeaceTech. These concerns reflect a wish to design PeaceTech so as to protect against potential negative impacts on people and processes. Good practice ambitions are also driven by a wider set of ethical commitments to particular forms of practice that peacebuilders understand to go hand-in-hand with the type of peaceful outcomes that they are trying to achieve. These commitments include: equitable partnership between global north and south; fostering greater inclusion; mitigating climate impacts; and using practices that support rather than undermine social justice. There are emergent good-practice frameworks, but they are scattered and varied in ways that undermine their systematic application.

Technomoral Concerns. A third set of concerns can be labelled ‘technomoral’; these try to deal with the ‘what are we doing really?’ questions that I have asked at particular points in the book. Technomoral concerns focus on how digital innovation shapes our lives and world in ways that relate to human flourishing, because it creates or destroys a world we might want to live in (Vallor, 2016). We have very few systems at all for guiding technomoral approaches to PeaceTech into practice.

We will explore the challenges of conflict contexts and the frameworks in each area, as a form of mapping gaps and emergent guidance.

4 Impact Concerns: Ethics, Harm and Data Protection

The first set of concerns attempts to ensure that PeaceTech does not harm people. These types of concerns are common to all research and peacebuilding enterprises, but PeaceTech poses distinctive challenges for how we identify and manage the risk of harm. All have been illustrated by stories throughout this book.

  • Digital technologies often produce detailed, geolocated, population-specific data that can focus violence on groups. However, our ethical and data protection frameworks tend to evaluate risk in terms of individuals. Geolocated population data, in a context of conflict, raises a need to evaluate PeaceTech plans in terms of whether and how the data could be used to target vulnerable groups of people.

  • Any digital innovation needs awareness of cybersecurity risk in order to account fully for the risks to individuals and populations. Using data responsibly is more than just a matter of individual consent and privacy. It involves being aware of what data is feeding analysis, understanding where it is stored, pre-identifying risks such as hacking, and putting in place mitigation strategies that may themselves need digital fixes, such as forms of online security (UN OCHA, 2016).

  • Digital technologies such as satellite or other aerial technology, even when used for peace research and peacebuilding, produce forms of knowledge that have value as ‘intelligence’. They come close to the knowledge that intelligence agencies gather, with the consequence of unpredictable spill-overs from PeaceTech to WarTech.

  • Peace processes, and the institutions and organizations providing public authority, are themselves a vital focus of peacebuilding or civicness initiatives. Damaging them has consequences for long-term levels of violence and for the trajectory of the conflict. Current ethical frameworks often do not require consideration of how a process or set of institutions might be affected.

In Universities, frameworks and processes exist to manage ethical concerns, through ethics approval systems, data protection frameworks and risk management matrices. These types of frameworks also exist in the business and non-governmental organizations that inhabit the PeaceTech ecosystem. However, how robust the framework is, whether it is a guide or is independently reviewed, and whether review draws on any context-specific expertise, will vary between different types of actor in the PeaceTech ecosystem, and often also depends on their size. On our PeaceTech collaborations, as we think these issues through, we have found that there are also tensions in how levels of risk are understood between Western institutions and researchers in the field. While University processes are robust as processes of formal review, NGOs sometimes have better practical protocols for the actual practice of safety in the field.

Often, in countries experiencing conflict, legal and policy frameworks do not exist, although of course good practice can still be implemented and frameworks used. Afghanistan, for example, had and still has no data protection law. However, in the US, UK and EU, for example, data protection is backed up by legislation, and tends to apply to research projects ‘abroad’ because it focuses on where data is held and processed, which is ‘at home’. Even where there may be technical legal gaps, organizations often apply legal standards as institutional good practice beyond where they are strictly enforceable.

Let us look at a few of the relevant frameworks in general terms, to understand how they deal, and do not deal, with the above challenges.

4.1 Ethical and Data Protection Frameworks

University researchers have to apply for ethical approval through processes that consider issues of ethics and risk, and also need to file data protection plans. These will be governed by organizational policy and by law, and often also by the terms of any funding.

However, the relevant ethical frameworks are modelled at heart on medical ethics, and work a bit like the questions we might ask before testing a new drug. These frameworks ask whether the person has given free and prior informed consent to be a ‘research subject’, whether they have been made adequately aware of the risks that the researcher understands to exist, and whether they have the capacity to withdraw at any time. Typically, the forms also ask about potential risks to researchers from conducting the research. Some risks can be mitigated, and all risks have to be balanced against the value of the research. Some risks cannot be mitigated and are too high, and then the research cannot take place.

The ethical frameworks therefore focus on whether there is an anticipated harm to individual research subjects and whether it has been adequately dealt with.

Universities and some peacebuilding organizations will have in place similar rules and procedures for data protection. These are also individual-focused. They are driven by the idea of an individual having privacy rights with respect to their own data, with consent as the basis for its use by others. Data protection requirements rise the more that ‘personal data’ (such as name, address, race or gender, which could specifically identify a person) is recorded and used. They also tie the scope of permission to use data to an evaluation of what information is strictly needed for the research.
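
To make this concrete, the sketch below shows one routine data-minimisation step that such frameworks encourage: replacing direct identifiers with keyed pseudonyms before analysis, so that working files never need to hold names at all. It is a minimal illustration only, using Python’s standard library; the key, field names and record are invented placeholders, and in practice the key would be held securely and separately from the data.

```python
import hmac
import hashlib

# Minimal sketch: replace a direct identifier (a name) with a keyed pseudonym
# before analysis, so working files need not hold names at all. The key below
# is a placeholder; whoever holds the real key can re-link the data, so it
# must be stored separately and securely from the dataset.
SECRET_KEY = b"replace-with-a-key-kept-outside-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the key."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability

record = {"name": "Example Person", "district": "A", "response": "yes"}
record["participant_id"] = pseudonymise(record.pop("name"))
print(record)  # the name has been replaced by an opaque participant_id
```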

As regards PeaceTech innovation in conflict contexts, these frameworks and procedures leave clear gaps, as outlined above. Most critical is what the UN Office for the Coordination of Humanitarian Affairs (OCHA) calls data relating to the ‘time and place-specific activities of affected populations’, that is, ‘spatiotemporal metadata’ (UN OCHA, 2016, p. 3). Ethical and data protection policies, forms and processes do not prompt researchers to consider whether, even when there are no individual ‘research subjects’, population-specific risks can be created by PeaceTech methods and data.

Group-based data in conflicts is highly political. In Bosnia, at one point during the conflict in the 1990s, there was graffiti saying ‘every Yugoslav war started with a referendum’. Referendums used to ascertain sovereign wishes could also be used as geolocated targeting maps by those who wanted to create ethnically ‘pure’ areas that would enable borders to be re-drawn and future referenda to be won. Time-and-place population data regarding activities, identity or political views has a WarTech value. In practice, most peacebuilders are acutely aware of the controversy of group-based spatial data in the contexts they work in. However, there is no protection framework for these concerns within many of the key institutions supporting PeaceTech, unless those undertaking the work create them.
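
One practical way those undertaking the work can begin to create such protections is to screen aggregated spatiotemporal outputs before release, withholding any location-and-time cell that describes fewer than a minimum number of people. The sketch below is a minimal illustration of that idea in Python; the field names (‘district’, ‘week’) and the threshold are assumptions for the example rather than an established standard, and small-cell suppression is only a partial safeguard against group-level inference.

```python
from collections import Counter

# Minimal sketch: suppress small location/time cells before releasing
# aggregated data, so outputs never describe very small groups of people.
# The record fields and the threshold of 10 are illustrative assumptions.

MIN_CELL_SIZE = 10  # do not release counts for groups smaller than this

def aggregate_with_suppression(records, min_cell=MIN_CELL_SIZE):
    """Count people per (district, week) cell and withhold small cells.

    `records` is an iterable of dicts with 'district' and 'week' keys;
    identity or activity fields are deliberately not part of the release.
    """
    cells = Counter((r["district"], r["week"]) for r in records)
    released = {cell: n for cell, n in cells.items() if n >= min_cell}
    suppressed = len(cells) - len(released)
    return released, suppressed

if __name__ == "__main__":
    sample = [
        {"district": "A", "week": "2023-W01"},
        {"district": "A", "week": "2023-W01"},
        {"district": "B", "week": "2023-W01"},
    ]
    released, suppressed = aggregate_with_suppression(sample, min_cell=2)
    print(released)    # {('A', '2023-W01'): 2}
    print(suppressed)  # 1 cell withheld because it was too small
```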

Second, there is no specific framework for considering whether and how cyber-risks have been evaluated, and whether sufficient research has taken place to understand what they might be in a conflict zone characterised by cyberwarfare.

Third, ethical and data protection processes often bring little context-specific expertise to the question of how data might be misused in the conflict in question. If those involved do not flag the right risks, those reviewing, certainly in Universities, are unlikely to have country or conflict expertise at all.

Consider, for example, satellite data as described in Chap. 12. It will often not focus on individuals, and will not therefore need their consent according to standard ethics review practice, but it is now sufficiently low cost or open access to be used on occasion by researchers and non-governmental organizations. Yet researchers and organizations may have little contextual knowledge or capacity to anticipate how the data might be used by local actors (remember Nick from Chap. 1). They may also be insufficiently expert in the type of data to understand its possible flaws and misrepresentations (remember the Palantir system from the story at the start of this chapter). The ethics forms involved are unlikely to uncover these issues or deal with them.

The Harvard Humanitarian Initiative (HHI) has usefully shared its experience. In 2010, HHI joined the Sentinel Project to use satellite imagery to monitor the border region of Sudan and South Sudan to detect threats to the civilian population. It reported on troops massing or moving, on what looked like possible attacks on civilian housing, and on possible mass graves. While the satellite imagery provided access to areas that otherwise could not have been monitored internationally, and while precautions were taken on how images were released (for example, editing out landmarks and coordinates), HHI concluded over time that:

the impact of the collection of imagery and the release of reports on many different types of actors, on the ground and at the international level, became increasingly consequential yet unpredictable. Thus HHI could no longer assess the potential risk the project was exacerbating, nor could it causally determine when it either mitigated threats or magnified them. (UN OCHA, 2016, p. 12)

Whew!

Having assessed that it simply could not act responsibly on the project, HHI acted responsibly by leaving the partnership, and by working to research what a good framework of common operational doctrines and ethical standards to address these issues might look like (UN OCHA, 2016, p. 12).

Beyond satellite data, other data, such as social media data, can connect groups of people with place and time, depending on how it is used.

4.2 Research or Intelligence?

With information such as the satellite images HHI used, the data can really be a form of ‘intelligence’ that governments may already have better access to, but may not. Again, we have no frameworks to enable us to draw a clear line as to what makes something a ‘research methodology’ and what makes it ‘intelligence’, or to prompt how to evaluate when and how that might matter. Technically, all research that provides ‘data’ can be used for all sorts of purposes. But use of open source imagery has an immediacy that is illustrated by Nick’s story, and just feels different. Heather Marquette suggests that some protection from toppling from research into intelligence gathering is given by the simple application of rigorous research standards. These include: specifying a research purpose for using satellite imagery; justifying its use; adhering to research ethics principles such as anonymity; openness about the use of the images and about the research; and educating people about how to interpret the images. But she also calls for honesty that even research rigour will not protect against intentional use of data as intelligence, and points out that this will typically happen without the researcher’s knowledge (XCEPT seminar).

4.3 CyberWar Risks

The application of ethical and data protection standards to digital innovation in conflict zones is further complicated by the heightened cybersecurity risks of many conflicts. Cyber-insecurity in conflict zones is often of a different nature, scale and quality than in other places. It cannot be accounted for by quickly racking one’s brains and asking ‘are there any risks?’ while filling in a form. Cyber-risk has to be researched in context to understand the consequences of even non-personalised data collection.

In Yemen, for example, internet provision is itself a key ‘front’ in the conflict—so much so that any quick explanation of exactly how is not possible (for more detail see Combs, 2020). Suffice it to say, the Houthis, who are a key armed actor in the conflict, and the current fractured government of Yemen, which is tied up with Saudi Arabia and the United Arab Emirates, who are also involved in the conflict, have two different internets. YemenNet is controlled by the Houthis, while AdenNet was set up using another ISP by the United Arab Emirates and Saudi Arabia to break Houthi control. These rival internets have different geographic coverage and operate different forms of censorship. This type of cyber complexity means that something as simple as encouraging people to communicate through computers, and which internet provider they use (something that is easily remotely discoverable), can give a sense of location and possible political affiliation. Plus, internet providers can monitor all individual online activity. Knowledge of the realities of internet provision, censorship and capacity to monitor is clearly relevant to a range of PeaceTech methodologies. Without an audit of how the internet works and how it is controlled, key insecurities and risks will be missed.
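
To illustrate why a user’s provider is ‘easily remotely discoverable’: providers announce the address ranges they control through public routing registries, so matching an observed address against those ranges is trivial. The sketch below uses Python’s standard ipaddress module; the prefixes and provider labels are invented placeholders drawn from documentation address ranges, not real Yemeni routing data.

```python
import ipaddress

# Minimal sketch: map an observed IP address to a provider using published
# address ranges. The prefixes and labels below are invented placeholders
# (RFC 5737 documentation ranges), NOT real routing data for Yemen; real
# prefix-to-provider mappings are publicly available via BGP and whois.
HYPOTHETICAL_PREFIXES = {
    "203.0.113.0/24": "Provider X (region under actor A)",
    "198.51.100.0/24": "Provider Y (region under actor B)",
}

def guess_provider(ip_string):
    """Return the provider label whose prefix contains the address, if any."""
    address = ipaddress.ip_address(ip_string)
    for prefix, label in HYPOTHETICAL_PREFIXES.items():
        if address in ipaddress.ip_network(prefix):
            return label
    return "unknown provider"

if __name__ == "__main__":
    print(guess_provider("203.0.113.45"))  # Provider X (region under actor A)
    print(guess_provider("192.0.2.7"))     # unknown provider
```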

In fact, a whole lot of things that we think of as fairly ‘safe’ or as having non-physical consequences in Western societies can carry a different level of consequence in conflict countries. WhatsApp groups can easily be hacked and people targeted for their views; use of Twitter/X can be monitored at scale; a ‘self-learning environment’ computer may fall into the control of local armed leaders.

4.4 Dual Use Restrictions

There are other forms of ethical framework in place which seem potentially able to ‘get to’ PeaceTech issues. Dual use frameworks in research ask researchers: ‘does your research have a military application?’ If one answers ‘yes’, the research cannot be undertaken under normal grant frameworks, for example it cannot be funded by the EU. Yet again, however, the framework does not seem to fit. The challenges of double disruption and ‘modularization’ mean that almost any digital innovation that can be developed for PeaceTech to make peacebuilding more effective can also be unscrewed and attached to war, to make war more effective. So nearly all use of normal digital devices to gather information is dual use in this broad sense.

The dual use tick-box does not seem to contemplate this. It creates a system where things are either ‘in’ or ‘out’. For example, the ERC Frontiers grant requires a declaration to the following effect: ‘We declare that the proposal has an exclusive focus on civil applications (activities intended to be used in military application or aiming to serve military purposes cannot be funded)’ (see for example Horizon 2022 Small Grants). Whether a software application, for example, is dual use therefore seems to depend on the intent the user brings, rather than on the possibility of actual dual use.

To be fair, there are other dimensions to the policy, including guidance for researchers, and a link from this research policy to a legal framework for export licences that helps elaborate dual use. However, these merely help in ruling in very particular types of military-related hardware as always requiring a licence because they are always potentially dual use, and in ruling in forms of public software as always acceptable and not requiring licences. This approach may be appropriate for the grant of export licences, but the policy as a whole is not designed to guide questions of how digital methodologies designed for non-military purposes and used in research could still be considered to raise a risk of dual use in ways that need to be considered and mitigated. Working groups continue to try to work out a better framework (see for example, Bromley & Gerharz, 2019).

4.5 Risk to Peace Processes: Too Much Knowing

PeaceTech entrepreneurs working to support mediation processes to end wars might also want to be concerned not just about harm to individuals, but about harm to the peace processes they are aiming to support. Our current formalised standards within the research environment do not raise this question as relevant to ethical or data protection concerns. But should they?

What if even good data harms a perfectly good peace process, damaging the prospects for a good outcome? Growing up in Northern Ireland, I found it heartbreaking when secret peace talks were revealed before they had a chance to produce any real compromise, and the parties who did not want to be seen to be compromising all jumped away from the process. Decades of conflict and death often followed.

Investigative journalism and public transparency are good things. But where a sensitive process is at play that holds out the prospect of life or death, should that process be protected from things like ‘publicity’ or ‘transparency’ at particular moments? This might be particularly important when we have all sorts of innovative ways of digitally communicating, so that any form of secrecy is difficult to achieve. Discussion with local political leaders in several contexts points to the dangers of ‘too much knowing’—who has met with whom, and discussed what, and why? Often the political exploration of unthinkable moves needs space.

If we seek to undertake PeaceTech to support the ending of wars, should we think in terms not just of a duty of care towards research subjects (that is, people) and geolocated populations, but also of a duty of care towards peace processes? Or is that too political? Do we just trust our broader ethical standards as sufficient, and require people to try to protect their peace processes in other ways—for example, by building public buy-in for the idea that political compromise has a value? Or is there sometimes ‘too much knowing’?

4.6 ‘Do No Harm’ Frameworks

For issues such as harm to processes, humanitarian actors seem to have a slightly more appropriate framework than that of ethics or data protection: the ‘do no harm’ framework. This framework also has its roots in medicine and the Hippocratic Oath. In the humanitarian world it is understood as a framework to help apply the seven fundamental principles of humanitarianism: humanity, impartiality, neutrality, independence, voluntary service, unity and universality. ‘Do no harm’ evaluations aim to support questions of what these values mean when applied to a particular proposed intervention. Do no harm involves asking not just about risk to people, but about unintended harm to institutions and processes. Harm could include how emergency healthcare might affect a national healthcare system down the road, or how food aid might impact on the livelihoods of local farmers. Do no harm frameworks in essence critically question the range of short- and long-term consequences that could flow from trying to ‘do good’ (see eg, ICRC Principles).

Yet there are problems with adopting do no harm in practice. Steeped in debates regarding the philosopher John Stuart Mill, I have always found it less than helpful in deeply divided societies experiencing conflict. How do we know what harm might result? When should that stop immediate action—how would we calibrate the risks of doing something versus the risks of doing nothing? It is always easier to do nothing, because that has no risk, right?

People living in conflict seldom do nothing. They work around and through the conflict; they create ‘normal lives’ in the midst of abnormality. They take incredible risks to travel through war zones to possible safety, or to build peace, or to fight for justice. They risk their political capital to call a ceasefire and take a chance on something different. They experiment with how to make connections, or create new ways of doing things. They do all this because they do not have the luxury of doing nothing. Sometimes that carries the biggest risk of all. Yet they own their own context, and can make these decisions as political decisions. External researchers must make them another way.

With all its faults, applying a ‘do no harm’ approach to PeaceTech design is useful in prompting consideration of issues of harm that go beyond immediate harm to people, to include possible political consequences; at the least it prompts useful deliberation that can inform how things move forward. Ethical review processes could usefully ask for a do no harm evaluation that is broader than harm to individuals.

5 Process Concerns: Ethical Design

Our second set of ethical questions relates to ethical design: that is, whether the ethics of the methods match the ethics of the outcome. The issues, again, have been touched on throughout, and the main ones can simply be outlined.

Inclusion. Peacebuilding at many levels is about inclusion, and inclusion has been a major challenge and concern in peace processes. Particular norms exist on the inclusion of women in peace mediation, such as UN Security Council Resolution 1325. Legal challenges have been made to peace agreements and the constitutions that result from them, on the basis of the exclusion of non-dominant minorities by peace process power-sharing or devolution deals (see eg Sejdić and Finci v. Bosnia and Herzegovina, Grand Chamber, European Court of Human Rights, 2009). Digital innovation, as we have seen, is often justified as increasing inclusion. Yet digital inclusion in fragile and conflict settings continues to be a major issue.

For example, women often have much lower use of mobile phones, and rural peripheral communities where minorities may reside can lack internet access or mobile connectivity altogether. As we have seen in the Yemen example, divided societies and tactics of cyberwarfare and censorship often mean conflict-related biases in the provision of digital connectivity. Further, we may simply not know what the conflict-relevant biases of a technology and its use—such as ‘who uses Twitter’—are.

Other problems, such as uneven inclusion, can be created by PeaceTech itself. Women in many conflict areas face cultural and safety barriers and additional costs to travel, for example where they have to be accompanied by a male family member. Digital access to peace talks post-Covid has occasionally seen men participate in person while women are facilitated remotely. Yet these are quite different forms of access.

All of these factors can mean that the offer of PeaceTech inclusion comes with more hidden exclusions. The biases of who has access to technology can damage the peacebuilding outcome, or the peace process itself, particularly in a context already full of exclusions, distrust and disinformation. Use of PeaceTech therefore needs to involve a prior audit of digital inclusion, and steps to account for or remedy bias, often through supplementary analogue processes.
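
To give one concrete sense of what ‘accounting for bias’ can involve, the sketch below shows a minimal post-stratification step: reweighting responses from a digital consultation so that under-connected groups count in proportion to their assumed share of the population. The population shares and response counts are invented for illustration, and reweighting corrects representation only; it cannot recover the views of those who could not take part at all.

```python
# Minimal sketch: post-stratification weights for a digital consultation.
# Population shares are assumed for illustration; in practice they would
# come from a census or a dedicated digital-inclusion audit.

population_share = {"urban_men": 0.24, "urban_women": 0.26,
                    "rural_men": 0.25, "rural_women": 0.25}

# Who actually responded online (counts by group) - skewed towards urban men.
responses = {"urban_men": 600, "urban_women": 250,
             "rural_men": 120, "rural_women": 30}

total_responses = sum(responses.values())

weights = {}
for group, count in responses.items():
    sample_share = count / total_responses
    # Weight > 1 boosts under-represented groups; < 1 shrinks over-represented ones.
    weights[group] = population_share[group] / sample_share

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# rural_women receive a large weight (about 8.3), urban_men a small one (0.4),
# but a group with zero responses cannot be recovered by weighting at all.
```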

Environmental Protection. PeaceTech offers ways of cutting down environmental impact, such as where online meetings are used rather than in-person ones. Data, however, has energy needs that are sometimes quite significant—for example in the large data warehouses behind the cloud. Mobile phones use precious metals that are often mined in conflict zones, in ways that fuel war economies, such as in the Democratic Republic of Congo. Thousands of small satellites create space debris. Plus, Planet Labs’ five-year partnership with Musk’s SpaceX supports a project that also aims to make space travel an ordinary and regular human adventure, an aim paralleled by the similar ambitions of other Tech billionaires. If achieved, mass space travel would have large-scale environmental consequences. The environmental impact of PeaceTech, and its connection to conflict itself, is not always easy to unearth and quantify against the alternatives.

Corporate Social Responsibility. Most large-scale businesses have adopted some form of corporate social responsibility standards that commit them to reviewing their own practice on issues such as inclusion and diversity and environmental impact. PeaceTech entrepreneurs can therefore choose partners carefully in this regard. However, it is more difficult to understand and assess the multiple relationships that digital providers are involved in, as illustrated by the Planet Labs-SpaceX partnership. Additional issues can arise due to the secrecy surrounding some proprietary technology of PeaceTech providers, which can preclude understanding what firewalls are in place between different customer uses. In 2019, for example, Palantir, the company in the Afghanistan story at the start of this chapter, was engaged by the UN’s World Food Programme (a Nobel Peace Prize winner in 2020), at a cost of $45 million, with the potential to improve the alleviation of hunger, but raising serious concerns about the data protection of millions of the world’s most vulnerable people (Parker, 2019).

Good practice for the use and design of digital technologies in peace processes requires that frameworks for assessing, avoiding or mitigating these process concerns are developed and applied.

6 Techno-Moral Principles

Even deeper ethical questions arise related to ‘what are we doing really?’ The idea of techno-morality for the digital revolution tries to understand how digital technology changes our lives invisibly by structuring our relationships, our aspirations, our activities and our use of time. Remember the social media algorithms promoting disagreement, based on turning all our interactions into a chance to advertise? Shannon Vallor (2016) suggests we should ask: are we using digital technology in ways that will lead to human flourishing because it creates a world worth living in?

Technomorality switches the question asked by ethical frameworks, ‘what are we doing with the technology, and are we doing it safely?’, to consider ‘what is the technology doing to us, as humans, as a society?’ Should PeaceTech providers think about this question, and how? Again, we have seen a number of persistent concerns that seem relevant.

North-South empowerment and equity. How should supply and demand relate to digital innovation, and to where it occurs? Digital innovation clearly should not always come from the north to add ‘hacks’ to peacebuilding interventions undertaken in the South. PeaceTech efforts to develop digital innovation responsively seek to avoid supply-driven PeaceTech. However, digital innovation does not happen ‘in one place’, so how should we consider equity and participation in the underlying structural capacity from which PeaceTech arises: where data warehouses, fabrication and innovation infrastructure are located? What does ethics require—would replication, say, of data warehouses in the global South be a good thing? Or is this about using existing capacity in better service of global South inclusion? There can be arguments either way.

Morality of relationships, knowability and distance. Multiple instances of PeaceTech are about engaging bigger populations through remote engagement. Is this a good thing? Even if we could factor out disinformation and inequities in who it reaches? How does remote participation restructure relationships of trust if we focus on ‘knowing’ as an exchange of information, rather than on ‘getting to know’?

What if remote access enables international actors to work ever more remotely? There are gains of efficiency, capacity and even perhaps local ownership. Or is there an ethic of ‘being present’? Seán does not exist, and if he had, he was ‘in-country’; but similar people were at pains to tell me that most ‘Seáns’ are knowing about, and uncomfortable with, the inadequacies of their ‘presence’. As local expertise illustrates, a micro-knowledge is gained invisibly through every movement through a street, every interaction at a marketplace, and through the interpersonal relationships of friendship and co-working. It is even gained in every taxi ride taken between UN buildings, observed even through darkened windows.

From CSR to Cultivating Technomoral Virtues? And should we be concerned, beyond CSR, about how the providers of PeaceTech are creating the wider world of digital transformation? Some of the personal views of key providers of PeaceTech technology seem completely at odds with notions of ‘civicness’, with no commitment to inclusion, the moderation of hate speech, or even democratic elections. Some of these personal views translate into their business models. How does this sit with the idea of ‘technomorality’? Is there a need to ask, not just, ‘is this company engaged in providing something I do not like?’, but also, ‘is this company creating a world I do not like?’

To be clear, technomorality is not a matter of judging the moral worth of Tech entrepreneurs; it is a matter of considering one’s own moral stance with reference to the social reality being created. Asking—‘is this a world I want to participate in building?’ Our discussion of how we view PeaceTech practices of retro-fitting, modularization and hacktivism in a sense engages with trying to understand what sort of world PeaceTech World is, and what we are doing when we participate in it.

7 Emergent Responsive Standards

New ethical standards addressing all of the above types of concern are emerging, both for particular new technologies and for PeaceTech in particular. One difficulty is that there are so many different ones, and they are emerging and being developed so constantly, that it can be quite a task to try to synthesise them all and use them to guide action. We have found the following really useful, so I offer them as examples.

The UN advocates and aims to comply with the FAIR Principles for Scientific Data Management and Stewardship regarding the generation and use of data, and these Principles capture many of the lessons I have shared. The FAIR Principles elaborate a framework for ensuring that data is: findable; accessible; interoperable; and reusable. I also find OCHA guidance on data management very practical and useful in providing both standards and process advice for data-driven projects in conflict contexts (UN OCHA, 2016).

Particular sets of standards have been developed for particular technologies. Use of earth observation technology, for example, has multiple guidance frameworks addressing issues such as how to understand error and interpret images properly. A quick appraisal of these standards provides a sharp warning as regards the scale of expertise needed to use these technologies responsibly. Perhaps the best advice is to partner with knowledgeable organizations and research the relevant frameworks if delving into a new area.

Standards are also emerging from within the PeaceTech ecosystem to offer PeaceTech-specific guidance relevant to the above challenges.

Build Up, for example, has established Principles for Digital Development that are intended to guide good digital development, as part of an ongoing commitment to establish good practice frameworks. The Principles resonate strongly with our learned experience of what constitutes good practice, articulated earlier. The Principles focus on how to build processes that establish the right types of relationship, rather than evaluating only uses of data for harm. Build Up is also working to address ‘current ethical challenges’ and plans further development of PeaceTech guidance.

ConnexUS, of which Build Up is a part, has set out frameworks which go much more to the underlying technomoral issues. A Peace Impact Framework (PIF), for example, tries to go beyond ‘do no harm’ to suggest the type of iterative cooperative practice that might be useful to doing good, based on three pillars—lived experience; aligned measures; and shared reflection. The PIF in essence aims to build civicness as a practice of PeaceTech, rather than just concentrating on avoiding harm. ConnexUS is also piloting a ‘grounded accountability model’ (GAM) along with Everyday Peace Indicators, to create more community-driven ways of monitoring and evaluating peace outcomes. While not focused on digital innovation, the GAM offers ways of approaching data ownership, feedback loops and equitable partnership.

Guides are also emerging on particular PeaceTech methodologies that focus on the mediation at the heart of peace processes. UN DPPA has set out guidance on digital mediation in the form of a Toolkit, mentioned earlier (UN DPPA & Centre for Humanitarian Dialogue, n.d., 2019). UN DPPA and Swiss Peace have also set out a practical framework for using social media in mediation (UN DPPA & Swiss Peace, 2021).

These efforts are weaving good practice standards around PeaceTech in ways that are thoughtful about the particular peacebuilding challenges in peace processes, and tailored to helping navigate them.

Yet they face a central difficulty in keeping pace with digital innovation. Sullivan (2022), for example, argues that we live in a world where we simply cannot regulate to keep up with digital transformation, because ‘how the tech works’ constantly outpaces ethical frameworks. His ideas suggest that the value of new frameworks may lie less in providing static ‘regulation’, and more in how they create assemblages of people who entangle in ethical deliberation across the PeaceTech ecosystem and its very different types of provider with very different ethical commitments. Sullivan argues that ‘applying ethics’ involves asking how knowledge and relationships, empowerment and disempowerment, are being defined and co-created. This is a shape-shifting approach to ethical regulation for a shape-shifting digital conflict world. It seems to respond to the reality of digital innovation, and to involve a form of ‘technomoral’ commitment. However, ethics as creating relationships also seems difficult to translate into a clear-cut practice whose outcomes institutions can stand over.

8 Conclusion

Thinking back to the idea of peacebuilding ripples, and to peacebuilding as the practice of building civicness, it becomes clear that impact ethics, process ethics and techno-morality are deeply connected. The means of peacebuilding, such as creating trust, are often also its ends. Supporting civicness means embedding civicness as an ethic of production. Doing PeaceTech requires being part of a community that develops ethical strategies as much as ethical frameworks. Those involved should work to develop a practice of behaving ethically, legally, inclusively, equitably, environmentally and techno-morally.

Got that? Right.

Questions

  1. How concerned are you about the peculiar ethical, data protection and technomoral challenges of PeaceTech?

  2. How confident are you that we can design better ethical, data protection and technomoral frameworks to deal with them?

  3. Is there a risk that overthinking leads to inaction, whose harm we seldom evaluate?