Geoff Stead is a mobile tech designer with a wide range of industry experience, most recently as Chief Product Officer at Babbel, the international online language-learning company based in Berlin, and before that as Senior Director of Mobile Learning at Qualcomm in San Diego, California. He shared some thoughts about the social impacts of technology from the perspective of industrial design with Dr Clare L. E. Foster, founder of the ‘Re-’ Interdisciplinary Research Network (https://www.crassh.cam.ac.uk/programmes/re-interdisciplinary-network).


Clare Foster: As an inventor and innovator of mobile devices and applications for some 25 years, you must have thought a lot about the social and psychological affordances of tech devices and applications as they have evolved.


Geoff Stead: Yes. I have spent most of my career working in technology and education, always curious about how those two things could fit better together. At Babbel we created language learning apps in multiple languages and had millions of learners around the world logging on to improve their language skills. We were very aware of and grateful for those learners—their subscriptions paid our salaries, and we saw ourselves as working for them, trying to improve their learning. It’s humbling to think that our seven hundred or so Berlin-based colleagues were fully funded by our learners. Being aware of the social worlds of our audiences is central to what tech designers do: understanding exactly how users engage with technology, and how we can use that engagement to help them learn.


Clare Foster: Some of the articles in this AI and Society special issue suggest a dystopian future, but the impetus for the seminar series and symposium that inspired it was to see how we could take the social and psychological affordances of digital media and re-purpose them to positive ends.


Geoff Stead: Most digital product development walks exactly this line between dystopian and enabling technologies. It’s a tightrope I’ve been trying to balance on for my entire career. I’d say a lot of tech inventors and executives would agree we are in a bit of a negative place at the moment, with fake news and data-driven algorithms powering information chaos. But things are not all bad. There’s no denying that we are all experiencing a wave of change, a total transformation in the way we use tech, the way we let it into our lives and allow it access to our personal information. This wave is relentless—and it is already here. It isn’t something we can stop merely by theorising about it. So for tech designers like me and my colleagues who want to contribute to the social good, the question is how to make positive use of this wave of change. We can stand there with our hands out, vainly trying to stop the wave, or, to use a surfing metaphor, we can learn to ride it and harness its power for good.

That starts with understanding what powers it. In its current state, the internet economy is largely powered by platforms. Platforms for connecting people, such as Facebook, Instagram or LinkedIn; platforms for exchanging goods, such as Amazon, eBay or Uber; or platforms for accessing and sharing information, such as Google, TikTok or YouTube. Platforms dominate today’s internet world. Many are free to use; and most were originally built with good intent, driven by mechanical principles that are not inherently negative. What creates all the negative news headlines—which reflect a real social problem—is the particular datasets these principles are creating and consuming, and the way that data is used to generate revenue.

Thinking about the revenue model for these socially powered, free-to-use platforms, it’s worth noting that Facebook’s 60,000 well-paid employees and stylish offices are not paid for by their users, but rather by advertisers paying to accurately target their adverts at exactly those users most likely to buy their product. When you use these ‘free’ platforms, in effect, the platform is being funded by your eyeballs and your attention. The more time you spend on the platform, and the more ads you see, the more money the platform makes. Which is why all these platforms slowly evolve to become better and better at keeping you engaged, and at guessing what material you are most likely to click on. In the early 2010s, Facebook learned that the more personal the data they held about their users, the more successfully they could target ads, and the more they could charge for that. This started an arms race to collect more unique data about each user, to be able to offer ever more effective targeting.
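To make that incentive concrete, here is a minimal back-of-the-envelope sketch of attention-funded revenue. All the figures in it are invented for illustration; they are not real numbers from Facebook or any other platform.

```python
# A back-of-the-envelope sketch of why an ad-funded platform optimises for
# engagement and targeting. All figures below are invented for illustration;
# they are not real numbers from any platform mentioned in the interview.

def daily_ad_revenue(active_users, minutes_per_user, impressions_per_minute, cpm):
    """Revenue earned per day from showing ads to engaged users.

    cpm is the price advertisers pay per 1,000 impressions; better targeting
    (i.e. more personal data) lets the platform charge a higher cpm.
    """
    impressions = active_users * minutes_per_user * impressions_per_minute
    return impressions / 1000 * cpm

# Baseline: modest engagement, poorly targeted ads.
baseline = daily_ad_revenue(active_users=1_000_000, minutes_per_user=20,
                            impressions_per_minute=2, cpm=1.50)

# Same user base, but the feed is tuned to hold attention longer and
# personal data allows more precise targeting, so advertisers pay more.
optimised = daily_ad_revenue(active_users=1_000_000, minutes_per_user=35,
                             impressions_per_minute=2, cpm=4.00)

print(f"baseline:  ${baseline:,.0f} per day")   # $60,000
print(f"optimised: ${optimised:,.0f} per day")  # $280,000
```

Even with invented numbers, the direction of the incentive is clear: every extra minute of attention and every extra data point that sharpens targeting feeds straight into revenue.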

For many of today’s platforms, that personal data has become even more valuable than the original target of simply capturing the minutes and seconds of your attention. The more information a platform can collect about you, the better. This explains the business model behind things like cheap DNA testing sites. You give them your entire DNA sequence in return for a lightweight analysis. They use that data to build a massive library that can be monetised in new ways. There’s nothing inherently wrong in this. You, the end-user, are getting a free or subsidised service in exchange for your data. But technology is about optimisation, and these platforms will optimise for cost effectiveness—in this case for more eyeballs or more data, even if the way they get that data is slightly underhand or devious. It happens in small, seemingly innocuous steps, and creates unimaginable effects at scale. That’s where the tension comes in. Understanding how today’s platforms generate revenue, how they draw our attention and keep it, and how they collect and re-use our data is a critical skill in making sense of the digital world we live in today.

In a way, Babbel had some similarities, in that we also wanted to attract as many users as possible and have them spend more time in our app. But the huge difference in our case was that there was no subterfuge, no advertisers, no monetising of learner data. We went in exactly the opposite direction to Facebook in the early 2010s, moving to a subscriber model where users pay directly for a service (language learning) and in return we didn’t resell their data to anyone. The only person who got additional benefit from a user spending an extra hour on our platform was that user themselves: they got a bit better at speaking Italian, or German, or English. We saw real value in our users’ data, but that value was in learning more about how people learn online, so we could improve how they learn online. We used user data to improve the effectiveness of our product.


Clare Foster: When you were at Qualcomm, was your innovation team tasked with inventing mobile apps that were more addictive?


Geoff Stead: Well, in a sense yes, although we did not use that term ourselves. Any social media or engagement-powered app on your phone is designed to encourage you to pick it up again, and again, repeatedly—to do one more scroll, or click, or post. A whole field—behavioural economics, nudge theory—has grown up around designing for engagement. This power can be used for good (in our case at Babbel, self-improvement) or bad (like gambling apps). There are many different theories and techniques that go into designing an addictive app, and a growing literature about it. Richard Thaler first comes to mind—a Nobel prize-winning expert on ‘nudge’ theory. He has written extensively on behavioural economics and is perhaps best known for pointing out that people do not necessarily behave completely rationally.[1] Another voice is Robert Cialdini, an expert on influencing and persuading.[2] For Cialdini, persuasion is a science, not an art, and his methods are often used in app design. They include core principles like scarcity (disappearing messages), reciprocity (you follow me and I’ll follow you back), or the offer of other desirable but hard-to-get social or psychological benefits, such as authority, consistency, or peer recognition. Then there is Nir Eyal, who wrote the widely read ‘Hooked’—his own summary of the factors that drive repetitive digital habits and how these can be used to hook you into a product.[3]
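To show how principles like these can end up inside an app, here is a purely hypothetical toy in Python: a notification picker that chooses whichever persuasion principle a user has historically responded to most. It is an illustrative sketch, not the design of Babbel or any real product, and every name, message and number in it is invented.

```python
# A hypothetical sketch of how Cialdini-style persuasion principles can be
# wired into routine app notifications. This is an illustrative toy, not the
# design of Babbel or any real product.

import random

PERSUASION_TEMPLATES = {
    "scarcity":     "Your streak expires in 2 hours - open the app to keep it!",
    "reciprocity":  "{friend} just followed you back. Return the favour?",
    "social_proof": "{count} people in your city practised today. Join them!",
    "consistency":  "You said you'd practise daily. Keep your promise to yourself?",
}

def pick_notification(user):
    """Pick the persuasion principle this user has historically tapped on most."""
    principle = max(user["tap_rates"], key=user["tap_rates"].get)
    template = PERSUASION_TEMPLATES[principle]
    return template.format(friend=user.get("friend", "Someone"),
                           count=random.randint(500, 5000))

user = {"tap_rates": {"scarcity": 0.12, "reciprocity": 0.31,
                      "social_proof": 0.08, "consistency": 0.05},
        "friend": "Alex"}
print(pick_notification(user))  # reciprocity wins: "Alex just followed you back..."
```

The point of the toy is the loop it implies: the app measures which kind of prompt you respond to, then serves you more of exactly that.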

These theories and techniques, now built into apps that depend on user engagement, are worth taking some time to understand. Knowing about them helps users build their own defensive strategies—so when you get that message notification on your phone or that pop-up window, you remember you might not want to pay attention to it right away; or when you feel that buzz of your phone in your pocket, you realise you might not want to look at it. Features like these are all part of a quite deliberate cycle designed to draw you in and make you more addicted to your apps.


Clare Foster: Is part of the problem that many of the people inventing these technologies and business models do not think about their wider social impacts because they do not come from a social science or humanities background?


Geoff Stead: Partially, yes. The tech industry needs more diversity across multiple dimensions. It is certainly true that engineers dominate the platform world, and this skews who shapes those product experiences. The higher the ratio of engineers to social scientists, the more techno-bias is likely to appear in the platform. But it’s worth looking a bit deeper at some of those biases. All the tech companies are desperate for engineers, for people who can make tech do things: there is global pressure on hiring and developing these skillsets. For example, many tech companies, such as Babbel in Berlin, work entirely in English so that they can attract engineers from around the world. Many of these engineers might come from countries or backgrounds that are more socially conservative, such as Eastern Europe, Russia, or smaller cities in India—and they do tend to be younger men, willing to travel for work. This labour fluidity is a great global leveller, but it also means that even in a city such as Berlin, known for its liberal social ethics, it is possible for a tech company to accidentally introduce a more conservative ethos or gender bias to a platform.

But most concern about implicit platform biases comes not from human factors, but from AI. The engineers I mentioned are probably not even making the decisions about how their platform behaves—an algorithm is doing this, based on learned data. And although an algorithm learns by harvesting information from people, it cannot necessarily assess whether or how that information itself might be skewed or biased. Any publicly available training data is very likely to have embedded bias within it, for example, more pictures of white people than black people, or more men than women, or more western than eastern food photos. This produces skewed platform behaviour which in turn produces a disproportionate or skewed vision of the world for its users. It’s a vicious cycle. Understanding how training data drives this feedback loop is key to understanding possible bias.
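As a concrete illustration of that feedback loop, here is a deliberately trivial sketch in Python. The ‘model’ just predicts whatever label it has seen most often, and the dataset is invented, but it shows how a skew in the training data becomes the model’s default view of the world, and how feeding the model’s outputs back in as new data amplifies the skew rather than correcting it.

```python
# A minimal sketch of how imbalance in training data becomes imbalance in a
# model's behaviour. The 'classifier' is deliberately trivial (it predicts
# the most common label it has seen); the dataset is invented.

from collections import Counter

# Invented training set: far more "western" food photos than "eastern" ones.
training_labels = ["western"] * 900 + ["eastern"] * 100

class MajorityClassifier:
    def fit(self, labels):
        self.counts = Counter(labels)
        self.default = self.counts.most_common(1)[0][0]
        return self

    def predict(self):
        # With no other signal, the model falls back on what it saw most often.
        return self.default

model = MajorityClassifier().fit(training_labels)
print(model.counts)     # Counter({'western': 900, 'eastern': 100})
print(model.predict())  # 'western' - the skew in the data is now the model's default view

# Feedback loop: if the model's outputs are fed back in as new 'data',
# the original skew gets amplified rather than corrected.
for _ in range(100):
    training_labels.append(model.predict())
model.fit(training_labels)
print(model.counts)     # Counter({'western': 1000, 'eastern': 100})
```

Real recommendation and vision systems are vastly more sophisticated, but the underlying dynamic is the same: whatever is over-represented in the data tends to be over-represented in the behaviour.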


Clare Foster: So not only is there unintended bias or worldview coming in from the kinds of human engineers making tech decisions, there is also algorithmic bias in the raw data that is their starting point.


Geoff Stead: Massively. And since the biases are buried deep inside huge collections of data they cannot be easily understood or seen by us end-users. We all need to view AI-powered advice with open eyes, and question the decisions it makes—as well as to enthusiastically encourage those trained in the arts, social sciences and humanities to become engineers. Tech companies are trying to engage with ethics and to recruit ethics advisors who can resolve these issues, but it is a tough topic to fully understand.


Clare Foster: So should an understanding of the social influences and impacts of tech be a deliberate part of the design of platforms?


Geoff Stead: Yes, of course, although if a platform is run by a commercial company, its primary responsibility is to its shareholders. That’s how these things work. That’s why understanding where the money comes from for each app you use is important. Bear in mind that we—as consumers—also actively feed some of this dystopian world. We expect to use these platforms for free, which drives their need to monetise our personal data to generate revenue. If we were willing to pay for access instead, this would stop. Another problem is that governments and large companies are enthusiastically investing in AI-powered solutions without critically evaluating the biases that may be hidden inside the data. If these large institutional buyers were more critical of data-driven processes, it would force AI companies to invest more energetically in addressing problems of bias.

There is a growing call for ethical AI, which many of the larger platforms (such as Google and Facebook) are finding hard to fully embrace, while in contrast new startups are taking it quite seriously. If, as consumers, we selected our social media platforms for their ethical stance, we would help accelerate the change. But we are mostly too busy or distracted. When you last picked a platform, or when your workplace last licensed a digital tool, did you look for a diverse leadership team in that supplier? If not, why not? Consumers could do more to make sure they see some women, some faces of colour, and even some social scientists at the top table too. We should all encourage and support more diverse communities to become coders and data scientists—and we can all help by engaging with these issues and raising their profile, both in the tech industry and among the public in general.


Clare Foster: How much in your product development teams at Babbel did you discuss repetition of various kinds as a characteristic of tech itself?


Geoff Stead: Well, repetition is a characteristic of learning itself, so of course as a language learning app we used techniques of repetition in multiple ways. For example, reading a word, hearing the word, saying a word, then writing the word—there is an implicitly algorithmic approach to how people remember things, how patterns lay themselves down in human neurological processes of thinking and learning. So yes, we used that understanding enthusiastically: such iterative learning techniques are typical of how good digital learning and especially digital language-learning tools work. But this is a specific, very contained, and defined domain. It is repetition for conscious learning, which is a good thing. Your other contributors in this special issue are mostly discussing misinformation and fake news, where you are prompted by multiple touch points that remind you of a message until you come to falsely believe it is true, or that it came from the stated source.
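As an illustration of that kind of conscious, contained repetition, here is a generic spaced-repetition sketch in Python of the sort widely used in digital language-learning tools. It is a simplified, generic scheduler, not a description of Babbel’s actual algorithm; the doubling rule and the example word are assumptions made purely for the example.

```python
# A generic spaced-repetition sketch of the kind common in digital
# language-learning tools: each correct recall stretches the gap before the
# next review; a failed recall resets it. This is a simplified, generic
# scheduler, not Babbel's actual method.

from dataclasses import dataclass

@dataclass
class Card:
    word: str
    interval_days: int = 1   # how long to wait before showing the word again
    streak: int = 0          # consecutive successful recalls

def review(card: Card, recalled_correctly: bool) -> Card:
    if recalled_correctly:
        card.streak += 1
        card.interval_days *= 2          # widen the gap: 1, 2, 4, 8, ... days
    else:
        card.streak = 0
        card.interval_days = 1           # start the repetition cycle over
    return card

card = Card("la finestra")               # 'the window' in Italian
for outcome in [True, True, True, False, True]:
    card = review(card, outcome)
    print(card.word, "next review in", card.interval_days, "day(s)")
```

The repetition here is entirely in the learner’s interest: the schedule exists to move a word into long-term memory, not to pull you back to the app for its own sake.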


Clare Foster: Thinking about repetition’s essential relationship to learning, it is interesting to be reminded there is an automatic element to it—a built-in unconsciousness that belongs to iteration. So countering iteration’s effects needs a kind of consciousness-raising. Perhaps this is another reason why we need to promote the idea that we are moving from a culture of discovery, or argument, where what matters is what actually exists, to a culture of repetition, where what matters is what gets repeated—what gets attention. Unconsciousness lies at the heart of the novel social processes that are forming.


Geoff Stead: I’m not sure it is as simple as saying that we are moving from a culture of argument to a culture of repetition. Argument and fact are still happening, and at scale—but they are happening in new places, being brokered by new people. The tools and the channels are changing around us, but we are being too slow to adapt the way we critically review and critique them.


Clare Foster: That’s very helpful. It’s been so useful to have your insights as a tech insider. Do you have other summary observations you find yourself often making?


Geoff Stead: Well, first might be the fact that with great power comes great responsibility. We have amazing new tools and access to information. But it’s not all real. Everyone should critically review their own role in the use and propagation of tech.

Second, if you are not paying, you are not the customer. This may sound obvious, but all those apps are expensive to make and maintain. Understand who is paying for them and why, before believing what they tell you.

Third, the world of tech and AI needs diversity, and the skills of the humanities and social science. As tech becomes unavoidably more central to our lives it needs to be built by everyone, not just your stereotypical tech bros. If you are reading this volume you may be a perfect candidate. Get involved.


Clare Foster: Finally, in terms of policy recommendations or research challenges, do you think solutions lie in user-education, rather than regulation?


Geoff Stead: Definitely both. But user-education can react faster! We have to accept that tech can be addictive—that's the world we live in. As tech-users we must be selective with how and when we use tech. We need to take back control and keep tech in its own place. There are four techniques that can help achieve this, well known among designers of these apps—whose job is to stop you doing them! Basically the four main recommendations for keeping tech in its place are:

1. Take control of your time.

   Think consciously about when you use tech and when you do not. Think about times in your day when perhaps you do not want any tech there. Part of this is about controlling notifications, perhaps switching them off so that the device you are using to create—write, make, whatever—is not also connected to that live world. Think critically about the apps you use, and the value they bring. Think about when to switch your phone off, or put it in flight mode. Give yourself ‘no-interruption time’, when you consciously disconnect.

2. Take control of your space.

   Allocate some space or spaces in your house where you do not have tech. Some people make their bedrooms no-phone zones, but it can also be a room where you read, or talk, or eat. Space where you are in charge, not a device or notification.

3. Take control of the expectations in your relationships.

   Friends and colleagues may expect an instant response when they get in touch. If you work like me you’ll have multiple channels permanently open—Slack, email, WhatsApp, Skype, text—with the implied expectation that you will reply instantly. It’s important not to fall into that pressure trap. Decide for yourself when it’s critical to reply and when you can wait until you are ready. Managing these social expectations can be more important than it might seem at first. Many people find themselves over-invested in social media in a way that actually crowds out some of their other relationships. If this is you, try detoxing for a month to rebuild those other relationships.

4. Take back control of your physical body.

   On a mobile device our eyes are glued to a small screen; on a computer we are sitting. Try to find a non-digital aspect of life to engage with that involves neither. Not watching a movie, but hiking, or swimming, or meeting friends in the park. Find a source of joy or satisfaction that is not tech-driven.

Having said that, I hope to see you all online. @geoffstead.