1 Learning Through Doing

When we began our work in the PeaceTech field, it was with a sense that everything was doable. It was a matter of bringing the right capacities to bear on the right problems.

Then, as I have described, we discovered plenty of challenges. Getting things done seemed really hard. It often took longer than we thought, and demanded more commitment. There were also some recurring problems. Tech wizards offered us unlimited potential, then started to hum and haw, and did not deliver. Businesses that we assumed would cost, plan and deliver work to clear specifications better than our informal in-University collaborations did not: they often wanted money up front, made no commitment to joint planning, and came back with outputs that only half worked. We encountered both innovative new start-ups who could not deliver what they promised, and established businesses who somehow could not work within straightforward planning processes. (And of course we also discovered lots of wonderful people and good practice!)

To be honest, it sometimes felt as if we were not ‘doing things right’. Our work risked turning into experimentation without clear results. We seemed to have entered a business world of smoke, mirrors, and unpredictable and therefore unmanageable costs.

Over time, we have reflected and talked to others in the digital transformation and PeaceTech field, commissioned papers to inform our PeaceTech work, taken digital transformation courses, swapped notes with the most similar data projects, and watched some massive digital transformation failures in our own wider University environment (sighs). It seems the problems we encountered are in fact common.

Many of these PeaceTech problems have been touched on in other chapters. However, it seems useful to draw together lessons and choices.

2 Why Digital Transformations Fail

There are masses of business blogs, reports and academic articles dedicated to ‘why digital transformations fail’. The lists that emerge are very similar and, interestingly, few of the reasons relate to the technology itself. Broadly, they include the following:

  • Transforming on the hoof. Not having a clear vision of what you want to achieve in terms of sustainable business outcomes.

  • Not being able to take the range of stakeholders on the journey. Whether it is those who pay for things in the business, those in your own organisation who must engage with the technology, or the end-users or ‘customers’, leaving any of them behind spells failure.

  • Difficulty in appropriately staffing projects. Teams often must change: to add the right tech skills, held by people you can talk to and understand, who stay with the project long enough to give continuity of delivery; or to add the right researchers, project managers and peacebuilders, to bridge between tech language and capacities and what ‘peacebuilders want to do’.

  • Not getting long-term funding and commitment to the digital transformation effort. Production will involve ‘invisible work’ that is costly in time and money, particularly at the beginning. Longer term, you need to be able to sustain the work once the ‘shiny’ first iteration that the funder has already taken credit for is no longer good enough.

  • Not thinking tactically about technological tools and capacities across the organisation you work for, and those of the end-users you work with.

In our particular ‘peace and conflict field’, we encountered similar challenges, in ways that were specific to that field. These often intersected with ethical and moral challenges, which will be dealt with in the next chapter.

You may encounter other problems, but what follows is an account of ‘what I wish I had known’.

3 When to Do Something

Let’s start with the positive: it is good to commit to innovation where things are not working.

The key commitment that drives PeaceTech is a commitment to innovation. Innovation works best when it responds to a problem. Understanding that there might be a technological solution, however, also requires being someone who is a bit interested in exploring innovative ways to solve problems, and who makes some commitment of time to understand technological advances and what they offer.

I became committed to the digital innovation that led to the PA-X Tracker for the following quite simple reasons.

First, I have always been frustrated with how much we (researchers) replicate data efforts, in particular in the peace and conflict field, without considering what we might get from creating better ways to combine data. As some of my stories illustrated, data initiatives sometimes move forward in overlapping ways in different organisations. Forms of replication can be useful, even if it looks a bit chaotic. However, I was and remain convinced that as researchers and practitioners we could do much better in bringing data together in intelligent ways, to better support practice.

Digital developments such as APIs, embedding visualizations into multiple websites via iframes, and tools for collaboration now make it easier to cooperate and to connect data across institutions, without people having to give up institutional ownership of data and products. This is important because most data projects need ongoing institutional homes and support, meaning that ownership matters. Most big digital developments in the wider world, such as the move from 3G to 4G to 5G mobile networks, have come about not just through better cabling, but because protocols for sharing networks, collaboration and connectivity were created.
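To make the iframe-and-API pattern above more concrete, here is a minimal sketch, in TypeScript, of how one institution's website might embed another institution's hosted visualization and pull records from its public API. The URLs, endpoint and field names are hypothetical placeholders invented for illustration; this is not a description of the actual PA-X or partner services.

```typescript
// Minimal sketch: embed a partner's hosted visualization and query their public API.
// All URLs, endpoints and field names are hypothetical placeholders.

interface AgreementSummary {
  id: string;
  title: string;
  signedOn: string; // ISO date string
}

// 1. Embed the partner's visualization without copying or re-hosting their data.
function embedPartnerTimeline(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;
  const frame = document.createElement("iframe");
  frame.src = "https://partner.example.org/visualizations/timeline"; // hypothetical
  frame.width = "100%";
  frame.height = "600";
  frame.title = "Partner peace process timeline";
  container.appendChild(frame);
}

// 2. Pull structured records from the partner's API; they keep ownership and hosting.
async function fetchAgreements(country: string): Promise<AgreementSummary[]> {
  const url = `https://partner.example.org/api/agreements?country=${encodeURIComponent(country)}`; // hypothetical
  const response = await fetch(url);
  if (!response.ok) throw new Error(`API request failed: ${response.status}`);
  return (await response.json()) as AgreementSummary[];
}

embedPartnerTimeline("timeline-container");
fetchAgreements("Yemen")
  .then((agreements) => console.log(`Loaded ${agreements.length} agreements`))
  .catch(console.error);
```

The point of the pattern is that each institution continues to host and control its own data and products; others connect to them rather than copy them.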

Second, conflict and capacities to mediate ends to it are indeed changing—for the worse. Given that diverse data now exists in good quality and can support what I have called ‘Peace Analytics’, it seemed useful to try to bring that data to bear on the types of agile and adaptive decision-making that those seeking to end conflict must make to address multilevel conflicts that operate as a complex system.

Third, as our own data collection efforts grew and our PA-X Peace Agreement data was more widely used, we garnered a range of quite different ‘customers’ or end-users for our data. This drove further innovation because it seemed useful to develop a range of ways for different types of people to enter the data and use it. This also built the reach of the large-scale work we had already invested in.

Our current drive to produce Peace and Transition Process Trackers is tied up with these same impulses. But the larger point is that the innovations all grew from a perceived need.

4 Shiny—Beware!

If you are a person who is attracted to digital innovation—and, strangely for someone so technically challenged, I am (!)—there is a tremendous seduction about the digital world. There can seem to be a million racy projects and a boat that is leaving without you on it. Also the potential seems really limitless. Everything can always go bigger and better, more comprehensive, multi-multi-functional.

I often thought that we could bring data together, or create visualizations or new technological ways of working, and then ‘see what we could do’ with it. Could we see new things? Get new insights? Have a whole new way of working that revealed incredible new research findings?

But that does not work. It doesn’t work because in any digital or data innovation project you make a lot of decisions that could be made lots of different ways. It is impossible to make these decisions in any sort of consistent or coherent way, if you do not know why you are doing what you are doing.

This point may seem obvious. But there is something about the scale of potential of tech solutions and just how shiny they appear, that draws people into experimentation without purpose.

There may be reasons to experiment—if you are doing tech just to learn how to do tech, that is fine if you are honest about it, but even then you will have learning outcomes to drive your decisions. I am also all for exploration as creative enterprise, and in fact we have used ‘visualization as exploration’ as a research methodology in our work (See Bell, Bach and Kauer, 2022). But we still had a sense of ‘why’, that drove how we went about things.

If the ‘why’ is clearly specified, it is also worth being somewhat agnostic as to whether tech is the answer. Rather than saying ‘I want to create an App to support peace agreement implementation’, it can be useful to be agnostic about the tool: ‘I want to support peace agreement implementation.’ This then involves a series of prior inquiries. What do we think amounts to peace agreement implementation? Where is it going wrong? What can be done about it?

At that point you can consider where technological solutions might solve particular problems, such as: wouldn’t it be great if, instead of people in field missions all going individually to the same sources to manually put together very similar reports on ‘how things are going’, there were a website that had this data in easy to ‘visualise and grab’ ways? Technology will only ever be a piece of the solution.

5 Scope Versus Usability

We also discovered a trade-off between the scope of a PeaceTech application and its usability. The story of ‘doing one thing’ in the Ceasefires tracker and the PeaceFem App reflects the advantages of limiting scope to one clear purpose. In Chap. 7, what I called PeaceTech Hacks, innovations that help with one task, were illustrated as a key way in which PeaceTech has worked well.

This is a ‘washing machine’ lesson. New washing machines have 50 or more different wash programmes, when one only ever really uses a maximum of four. Figuring out how to use those four can mean several hours reading a 60-page manual. If the machines just had four programmes, they would be easier for most users.

It can be quite tempting at the design stage to have your data or tech tool do all the things it could possibly do, in customisable ways. But this often takes the tool beyond what most users will want to do simply, and means it is only usable by the ‘especially dedicated end-user’. And the timescales of production extend and extend. We often found levels of debate and constructive tension with our visualizers because we wanted to limit interactivity to a few features, rather than enable people to explore ‘everything’ from one interface.

Scope/usability trade-offs also exist with regard to data. In our Peace and Transition Process Tracker, which attempts to respond to double disruption, we think the challenge is not to create the ‘ultimate tracker’ with all possible data and complex algorithms, as CEWS attempt to do. We think the problem is not ‘more data’, but ‘less’. The PA-X Tracker aims to provide better access and connectedness to the data people already use and trust, in ways that better connect to the questions peacebuilders are asking in a process.

Doing one thing well, however, is not the same as ‘once-off’ PeaceTech design. The PA-X Tracker comes from a wider data collection effort that has a long hinterland and integrity. It repurposed data-interface design from the Amnesty database, and in turn has been repurposed for parts of our new Peace and Transition Process Tracker.

6 Know and Collaborate With ‘End-Users’

‘Know and collaborate with end-users’ is the peace and conflict specific version of the exhortation to ‘know your customer and bring them on the journey’. Peacebuilders and researchers often do not think of end-users as customers, because we try not to have products to ‘sell’ and the culture of ‘partnership’ predominates, rhetorically at least. But, like a business, we want what we work on to be useful and used, and for this to happen PeaceTech innovations have to add value to the peacebuilding world. Our funders also expect this, and they, by the way, are second-level ‘customers’ who look at download figures, and monitor and evaluate how well a PeaceTech innovation works and what value it has added (as compared to the money it cost). So even not-for-profit PeaceTech experiments must respond to questions of ‘value’ and ‘usability’.

We have already mentioned being specific about who you think the end-users are, and what they want to do. Different peacebuilders will have different needs and capacities to use technology. For example, international peacebuilders and local peacebuilders often have quite different agendas for change, ways of working, and levels of digital inclusion. The same tool might not work for both. Others will have different capacities due to things such as the bandwidth available to them, access to a computer, and so on. The PeaceFem App, for example, is targeted at a very specific audience of women peacemakers and mediators, in particular those in the Middle East, and is designed to work on low bandwidth.

At the design moment, it can be useful to try to describe, very specifically, the end-user and the task the application is intended to help with. For example, End-user: ‘the person who arrives in a country field-team without much warning, who has better knowledge than the lay person but does not have the detail of the past peace process at their fingertips’. And Task: ‘this person wants to be able to quickly access past peace agreements and get a sense of the main issues they covered, with the capacity to open the whole document easily if they want.’

7 Making Good Tech Choices

The exhortation to ‘make good tech choices’ looks like a different version of ‘shiny—beware’. But it is less an exhortation not to jump to tech solutions than a reminder to make sure that the choice of tech is appropriate to the context and need. Remember the PayPal advice: use technologies your customers already use. I would add: remember that if you are actually walking into a shop, cash can often be faster and easier (although PayPal, I am sure, might disagree). Making good choices requires asking, and even researching: what tech do people already use? Does it raise security issues that they may need to think about more? What bandwidth do they operate on? Who has the capacity to use what?

‘If you have a hammer, everything looks like a nail.’ This phrase captures the idea that often our use of digital tools is ‘supply driven’ rather than ‘demand driven’, in ways that lead us to perhaps do silly and even unhelpful things (like hit something inappropriate on the head). Given that peacebuilding itself is criticised for being too ‘supply-driven’, replicating this problem in PeaceTech is to be avoided.

Critical Choices. We often faced choices relating to the tech tools we used. Sometimes the range of possible tools was overwhelming. Sometimes none of them seemed quite right, and we faced the choice of whether to work with existing software and tools or to design our own. Low-code or existing software can often make something quickly doable, sufficient to get ‘up and running’. It can be used by a range of staff without technical expertise, and has often had a lot of time and thought put into making the output look good. It can be good value (or already on your computer), and efficient because you are using a tool rather than inventing one. But sometimes it is just not the right thing, and all the workarounds become cumbersome.

For what it is worth, we have found that starting with low-code experiments, using software that you already have the skills to use, is a really good way to work out what you are trying to do and what is possible. Over time, you may need to customise or invent. As described for our ceasefire tracker, we used Knight Lab’s timeline tool initially in a no-code form. But when we wanted to design a bilingual Arabic and English timeline for Yemen, with Arabic read from right to left and English from left to right, we found it useful to design our own timeline (Yemen Timeline).
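To give a flavour of why bilingual direction-handling pushed us beyond the no-code tool, the sketch below shows one simple way a timeline entry might render its English and Arabic labels with the correct text direction. It is a minimal illustration using assumed data structures and an invented example entry, not the actual code behind the Yemen Timeline.

```typescript
// Minimal sketch of direction-aware rendering for a bilingual timeline entry.
// The event structure, container id and example entry are illustrative assumptions.

interface TimelineEvent {
  date: string;    // ISO date string
  titleEn: string; // English title, rendered left-to-right
  titleAr: string; // Arabic title, rendered right-to-left
}

function renderEvent(event: TimelineEvent, container: HTMLElement): void {
  const wrapper = document.createElement("article");

  const en = document.createElement("p");
  en.dir = "ltr";  // English flows left to right
  en.lang = "en";
  en.textContent = `${event.date}: ${event.titleEn}`;

  const ar = document.createElement("p");
  ar.dir = "rtl";  // Arabic flows right to left
  ar.lang = "ar";
  ar.textContent = event.titleAr;

  wrapper.append(en, ar);
  container.appendChild(wrapper);
}

// Usage with a hypothetical container element and an illustrative entry.
const timeline = document.getElementById("yemen-timeline");
if (timeline) {
  renderEvent(
    { date: "2020-04-09", titleEn: "Ceasefire announced", titleAr: "وقف إطلاق النار" },
    timeline
  );
}
```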

We have become less afraid to try to build our own customizations when low-code tools start being restrictive. Sometimes this has meant jumping into the ‘coding’ version of a tool to modify it, and sometimes it has meant creating our own visualization completely. Creating our own visualization also has the advantage that we can leave the code behind in an open-source way that is hopefully more useful to others in the peacebuilding field than what is already out there, where we have found existing tools limited for some peacebuilding purposes, such as bilingualism. We hope that in this way we can contribute to creating new research capacity, as well as new research. But we started ‘low’ or even ‘no’ code.

8 Building Digital Team Capacity

The right team capacities are needed at three levels. First, ‘domain expert’ capacity: in our case, people who know the peacebuilding field. Second, the right ‘technical expert’ capacities to deliver what you think will respond to the need.

The third element is a really important middle layer: you need ‘creative translators’ who can think about how to connect the problem with any proposed tech solution, and who can bridge the ‘domain expert/technical expert’ divide (see Fig. 12.1). Often ‘bridgers’ will have to be people who are domain experts but have some digital leadership dimension. We have been really lucky on our team to have such people, and we have also worked to expand existing staff skills and to think about the skills we need as we work. For smaller organisations, what I have called ‘PeaceTech Enablers’, such as Build Up, may be really vital partners to act as this connective tissue, while transferring skills.

Fig. 12.1 Connecting Experts: peacebuilding (PB) experts and tech experts are connected through connectors (‘bridgers’).

Specific skills are needed for the bridgers who do the connecting. In particular, having people who can write a technical specification that addresses the peacebuilding need is very important. So also is the capacity to test prototypes and to translate modifications into further clear specifications for improvement.

In the peacebuilding field, however, you are likely to need to connect groups who have different types of expertise at each end. You are likely to need a range of expertise and skills: peacebuilder practitioner skills, peace and conflict researcher skills, and conflict and peace data knowledge and skills. Over time you may also need a larger range of technical skills and resources: people who can install your data on a large-scale computer, access to that large-scale computer or data storage facility, data engineers, database designers and visualizers, and security advisors capable of evaluating whether the cybersecurity offered is sufficient for your conflict context and its risks (see Fig. 12.2). How do you access this expertise?

Fig. 12.2 Expert Clusters: peacebuilding (PB) experts (PB practitioners, researchers, and peace and conflict data analysts) are connected to tech experts (system engineers, software engineers, and visualizers).

Critical Choices. There is a choice here between whether to stay in-house, or go ‘out’. That is, do you recruit someone onto your team with skills or build up the skills of a team member, or ‘contract out’ support to a partner, consultant or company? The choice will be shaped by budget, and human resource and contracting matters. However, beyond those constraints, for us it often felt quite difficult to know which choice was ‘best’.

We worked flexibly, doing what seemed best sometimes in the moment, but also looking a little down the road. Both types of arrangement worked. What seemed important, however, was to create working relationships, whether in-house or out-of-house, through forms of partnership, business relationship or otherwise, that enable ‘iterative design’ and an ongoing process of collaboration. This approach partly reflects the fact that we are never just ‘commissioning’ a piece of technical work; rather, we need to engage in co-creation across tech and subject-matter experts. So we need a commitment from technical experts to that process. We can get this from relationships with business providers, but it does not work for us to fit within traditional business models of either ‘buying a job’ or ‘servitization’. More on that later.

In all peacebuilding expert to tech expert relationships, the most important ingredient for the relationship to work is the capacity to communicate across very different languages and forms of expertise. It may sound obvious, but sometimes digital innovation is presented as if you should have to take things on trust, as too technical to be simply explained. I now tend to assume that if the technical experts cannot explain to me what they are doing using language and concepts I can understand or learn, and cannot commit to design as a process, or if I do not commit in the same way, then the project is not going to work.

9 Sustainability in All Decisions

So many PeaceTech initiatives have not been sustainable, although interesting pockets of innovation may have been usefully incubated on the journey. But if you value your project it is really important to think about sustainability in a number of ways.

Think in advance about what is logical to sustain. Some tasks we are engaged in have logical end-points from the start: for example, the ‘Ceasefires in a time of Covid-19’ App, or our ‘local agreements’ data, because we know that an ongoing census will be impossible. For others, we have had to consider: is our data collection effort undermined by thinking of this project as finite because funding will be finite?

Sometimes the answer is ‘no’. In the PeaceFem App, the decision to focus on ‘significant’ examples, rather than all examples, was in part a sustainability choice, because it means the App remains useful and valid even if every new gender provision is not added.

Work within frameworks that are not disproportionately costly. If you have created something that you want to sustain, you need to think about how it will be paid for into the future. Ambitions of scale need to be tempered. Or sometimes you can work out digital ways to automate tasks at lower cost in sustainable ways, and work on those as part of the initiative. Questions of cost also involve thinking about the tech relationships you get into, before you commit to them in ways that are difficult to switch away from.

Engage with ‘Servitization’. I am not sure what the answer is, but servitization is a problem for sustainability. For tech providers it is often their business model. Where you want to purchase a piece of work, providers will want to create a service relationship. However, if you end up with multiple service relationships you will have multiple rolling costs that can suddenly add up to amounts that stop the project from being sustainable.

It can also be really difficult to be sure exactly what ‘service’ servitization provides. We talked to related database projects about whether they did their database design and storage in-house or not. Unlike us, their database designers were external providers. The business charged for ongoing storage and all that goes with it. However, the company often announced that an update had created a new security risk in their system, and then produced additional bills for fixing it. Our colleagues complained: ‘it sometimes feels like they break a window and then charge us to fix it.’ I know the feeling.

Entering a servitization model makes co-creation and iterative design very difficult. So you may need to talk all of that through and negotiate a different way of working, or build relationships with tech providers that somehow work around these models. As regards emergent PeaceTech providers, funders often want to see a plan for sustainability through ‘self-payment’, based on the PeaceTech innovation charging on a ‘servitization’ model. This can stand in tension with their desire also to have the tech produced ‘for public good’. Pushing servitization can perpetuate a business model that stalls rather than enables iterative development. It can also mean innovative PeaceTech entrepreneurs are pushed to provide a static ‘do a thing’ business model, rather than continue on creative journeys that are more open-ended. Yet ongoing sustainability needs to be paid for.

10 Design to Future-Proof

There are three main aspects to future-proofing.

Think ahead. Sustainability can also be addressed by thinking ahead about the things you will need to change, and commissioning the work not just to deliver the end product, but also to deliver easy ways for the product to be customised or extended in the future, as we did with the languages on PeaceFem.

Design for re-use. Often we have designed data interfaces not just with the immediate use in mind, but have also commissioned ways to modify the back-end design so that we can ‘re-purpose and reuse’ what has been developed for completely new uses. An example was the repurposing of the Amnesty interface for the Covid-19 ceasefires.
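As a rough illustration of what designing for re-use can mean in practice, the sketch below pulls dataset-specific details out into a configuration object, so that the same interface code can be repointed at a completely new collection. The endpoints and field names are hypothetical; this is a pattern sketch, not our actual back-end design.

```typescript
// Minimal sketch of a re-usable data interface driven by configuration.
// Endpoints, field names and labels are hypothetical placeholders.

interface DatasetConfig {
  title: string;      // heading shown in the interface
  apiUrl: string;     // where the records live
  dateField: string;  // which field to display as the date
  labelField: string; // which field to display as the record label
}

// The same rendering code works for any dataset that supplies a config.
async function renderDataset(config: DatasetConfig, container: HTMLElement): Promise<void> {
  const response = await fetch(config.apiUrl);
  if (!response.ok) throw new Error(`Failed to load ${config.title}`);
  const records: Record<string, string>[] = await response.json();

  const heading = document.createElement("h2");
  heading.textContent = config.title;
  container.appendChild(heading);

  for (const record of records) {
    const item = document.createElement("p");
    item.textContent = `${record[config.dateField]}: ${record[config.labelField]}`;
    container.appendChild(item);
  }
}

// Re-purposing then means writing a new configuration rather than rebuilding the interface.
const covidCeasefires: DatasetConfig = {
  title: "Ceasefires in a time of Covid-19",
  apiUrl: "https://example.org/api/covid-ceasefires", // hypothetical
  dateField: "declaredOn",
  labelField: "actor",
};

const target = document.getElementById("dataset-view");
if (target) {
  renderDataset(covidCeasefires, target).catch(console.error);
}
```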

Document as you go along. We always documented what we were doing, but now I would document even more. If you document your experiences, you create the capacity for new staff to come in and do the work, but also for the learning to be shared and used more widely than your own efforts. Documentation that ensures your PeaceTech efforts and capacities do not disappear should really cover the nuts and bolts of how the system works: what servers it is on, what relationships are needed to sustain use, the code used, the passwords, the decisions made, and so on. Documentation should be ‘internal’, for new staff to pick up and know what is going on. Documentation should also have an external form: sharing learning, processes and even code with others. We are now working hard on this. This book, to be honest, is an element of our documentation and lesson-sharing efforts.

11 Returning Data and Feedback Loops

There are issues with where data comes from and where it goes to, which we will discuss more in the next chapter. However, it is worth noting for now that it is important to have people ‘participate’ in, use and learn from their own data, and that their feedback is itself data that tells us something interesting.

The whole of PA-X was in some ways an attempt to pull a peace agreement repository together and return it to the people in-country who had helped create peace processes, and to make it available to others engaging in future peace processes. However, we now also collect perception information on peace processes as part of our new project. This data is collected through surveys of people in-country, to compare data on ‘how a peace process is going’ with in-country perceptions, so we can identify where to ‘mind the gap’. How, then, can this data be used by the same communities?

Is returning data to those it was drawn from a business need, or an ethical commitment? If you are serious about peacebuilding support, both, I would say.

12 Learn From the Local

Peacebuilding innovation is nearly always at its most innovative when responsive to conflict at the local level. This is no less true of PeaceTech. We have definitely struggled with this, but commissioning things in-country is nearly always possible.

13 Complicated Issues

Think about what you are doing, ‘really’. What does this even mean? Well… I think it is useful to remain aware of the criticisms of PeaceTech and of issues such as ‘double disruption’, and always to question: what am I doing? Or perhaps: what practice of production am I engaged in? This reflexivity involves being aware of ‘modularization’, ‘servitization’, and conflict-peace nesting, all quite complicated things. In short, it requires you to think about the ways in which you are engaged in this world. Are you replicating problematic practices? Are there consequences you should be worried about? I address this type of ‘technomoral’ reflexivity in the next chapter.

Ethics, harm, risk and safety. In addition to all of these things, you have to think about the consequences of what you are doing in terms of well-established processes of managing ethics, data protection, and risks of harm to people and perhaps to the peace process itself. It is to these issues that we now turn.

Questions

  1. How important do you think ‘iterative design’ is? Is it always important?

  2. What challenges do these lessons raise for ‘getting started’, or doing PeaceTech as a small local group?

  3. Is there something about digital innovation that causes us to think that normal ways of working are not to be applied? Which of the lessons apply to any project management, and what is distinctive to digital innovation in peacebuilding?

  4. Do these lessons affect any PeaceTech ideas or plans you have?