One of the most notable features of participants' situated accounts of domestic digital privacy management, and a matter we shall return to in due course, is that privacy as a topic breaks down into a wide variety of different and essentially unrelated local practices once you start to examine what people actually do. Nonetheless, what we want to do in the following materials is articulate an abiding concern, displayed by members in heterogeneous ways, to manage the potential attack surface of the digital, and to show how this plays out in their everyday lives.
Passwords and privacy
The first thing we note about domestic digital privacy practices is that only one of our participants exhibited the slightest concern about location-based data, and could see ‘no reason’ for their phone to track ‘where I’m going in everyday life’. This is not to say that our participants did not use location-based services – many of them did – only that location-based services were not seen or treated as problematic. Thus our participants simply turned them off if they didn’t want to use them, and many did so to prolong battery life. Location data was not discussed as something that needed any particular management with regard to what was shared with other people. Indeed, the only time that location was mentioned in regard to privacy was with respect to passwords, and rather surprisingly so:
Paul: I’m not particularly fussed about setting up passwords and things. I mean there’s no threat of network hijacking here. We live in the middle of the countryside, miles away from another house, it’s just not an issue.
As Kaye (2011) points out, passwords are, from the perspective of systems design at least, seen as a key privacy mechanism, yet it is not at all uncommon for passwords to be shared. Indeed, in our own study we also found that the use of passwords was routinely suspended under the practical auspices of ‘convenience’. As one participant put it with respect to his phone, for example,
I know it’s got my life on it but I look at it about thirty times a day or something and if I’m having to do that (mimes punching in PIN), you know!
This was not the only example of password suspension in our study, particularly with regard to devices that stay within the home such as desktop PCs, tablets and media servers. That users do not religiously employ passwords, and apparently share them with abandon, is a constant source of complaint for security experts. Nonetheless, since passwords are not religiously employed and are often shared, we sought to understand on what grounds they are used and/or shared (or not, as the case may be) across networks, devices, accounts, and content.
Unsurprisingly it turns out that the use of passwords is occasioned by a wide variety of practical issues and concerns. Thus some participants told us they used them because their devices ‘asked them’ for one. Often they had ‘no choice’ because they were using devices supplied by the organisations they worked for or devices simply could not be used without them. Others reported using passwords for personalisation, thus logging into a Hotmail or Google account “gets you everything”. Loss or theft was an often-cited reason for using passwords on mobile devices. Here participants felt that passwords might not only protect their data from uncontrolled access but also provide a motive for returning a mobile device to its owner. Managing access to devices and networks was another reason given for using passwords in general. However, passwords were not generally used because it was thought to be ‘good practice’. Only one participant, an IT support engineer, accounted for password use on this basis. For the main part we found that the use of passwords was contingently occasioned by the potential risks that attach to particular cohorts. As one couple put it,
Mike: The PC upstairs occasionally has a password. It usually doesn’t. It’s in the back room. The last time I put a code on was when we had a decorator in to do that room. I’ve done it once or twice when we’ve had guests staying …
Alice: Yeah, when my nephew comes, ‘cause he just logs into everything …
Fieldworker: It kind of depends on the guest?
Fieldworker: ‘cause you see it as a potential risk?
Fieldworker: What would be a potential risk?
Mike: Basically, er, adult material on there. So potential embarrassment I guess. With the decorator guy, it was more the general principle. There’s personal information on there …
Alice: Bank details and stuff, so we don’t want them …
Mike: Yeah. Whereas, if it was like family staying there, it’s more like the scenario where they just use the PC for something and stumble across a folder I’d rather they don’t stumble across.
Passwords are, then, seen and treated not so much as a blanket privacy mechanism but rather as a means of managing specific risks: e.g., the risk of exposing a child to inappropriate content, the risk of causing embarrassment to guest and self alike, the risk of exposing personal and even sensitive data to those who have no business looking at it, etc. This use of passwords to contingently manage foreseeable cohort-dependent risks ran throughout our data, particularly with regard to children.
Kit: On the television we’ve got one. You know, like on Netflix. Because when the children - ‘cause Mary’s sort of nine, she can go and choose what she wants to watch - but there’s certain things, if it’s above a PG, you need a PIN.
Tim: We only did that when she had friends round.
Kit: Yeah, that’s when she had a sleep over.
Tim: Before that, didn’t bother.
Kit: She had some friends over for a sleep over and I thought, oh – ‘cause we know that Mary wouldn’t go on to other things - but we thought, not sure what friends would do. We thought we’d just put that on. Just left it there now. So that’s passworded.
The risk of exposing children to inappropriate content is writ large in participants’ accounts and drives the use of passwords across all manner of networked devices and the concomitant use of managed accounts that restrict access to online content. In this respect then passwords are invoked with respect to parental responsibility and employed methodically as ‘gateway’ devices by parents or guardians, just as they are employed methodically to manage potential risks occasioned by guests or others more removed from the household cohort.
The methodical use of passwords as gateway devices enabling risk management is particularly pronounced when it comes to sharing, which is again contingently occasioned by a broad range of cohort-dependencies. Thus we find that our participants employed passwords alongside a range of risk management strategies within the various accountable relationships they had with ‘children’, ‘partners’, ‘family’, ‘friends’, ‘friends of the kids’, ‘guests’, ‘tradesmen’, ‘clients’, etc. First and foremost amongst these risk management strategies was ‘the front door’. This is not to say that being allowed through the front door warrants password sharing, but that it is an important premise or criterion for making such judgements: if you are not allowed to enter the home the gate is generally barred (though we are aware that network access may occasionally and with good reason be shared with neighbours, see Crabtree et al. 2012). Getting through the gate also turns upon the accountable relationship someone has to the members of the home. Thus, we found that ‘family’, ‘friends’, ‘friends of the kids’ (specifically teenage kids) and ‘baby-sitters’ were routinely given passwords to access networks, devices, and applications whereas ‘tradesmen’ and ‘clients’ of home-workers were not, and that this accountably turned upon ‘trust’.
David: The home you can police through other means – non-digital policing - so the doorway, if you’re in the house you have some kind of trust. If you’re like a tradesman then we might still, like put the password on the PC upstairs, but otherwise everyone is people we know and have some kind of trust with.
This is not to say that various categories of ‘visitor’ were untrustworthy, but that allowing them through the gate was occasioned by different orders of accountability within relationships of ‘trust’. In cases where visiting was premised upon purely professional criteria we thus found that gateway access was more heavily controlled (with participants entering passwords into their own devices to enable application use) and even monitored (with participants ‘sticking around’ to ensure visitors didn’t do anything they didn’t want them to do).
The front door and ‘trust’ may be sufficient to manage network access with regards to ‘family’, ‘friends’ and ‘guests’, but that does not mean that the members of these cohorts have blanket rights of access. Rather, we find that different gateways are in operation and that access to devices, applications, and content is predicated on cohort relevance. Thus we find, for example, that partners routinely access one another’s personal devices because doing so is relevant to the relationships they have with one another. We find that ‘families’ routinely share passwords to enable members of the cohort to access applications and content, and that this applies to both static and mobile devices, for a wide variety of reasons including entertainment, way-finding, cost, and ease of communication. And we find that ‘household members’ generally share passwords, even passwords to sensitive data, to collaboratively handle a contingent array of domestic matters.
Sam: Liam knows some of my bank stuff, because I have to get him to buy things from time to time. He knows the PIN code for several of my bank accounts.
Fieldworker: So he’s of an age where you trust him with that?
Sam: Yeah, well because he’s got his own bank account and is competent in using it I figure he’s going to understand how to use mine.
Whether paying for goods and services for others, or posting items online for others, or sending emails on behalf of others, etc., the demands of domestic life routinely occasion password sharing. It is not done blindly, however, but on a cohort-relevant basis, which further enables selective gateway access and the concomitant management of risk.
That people share passwords with one another, particularly for devices, applications and content, does not mean that anything goes.
Joe: My wife might use my phone if it’s handy, or I might use hers, you know. It’s not a big deal for us. But my daughter [who is 17] has got a PIN number on hers, and I think my son [who is 21] has as well. He’s got his locked.
Fieldworker: You don’t know the PINs?
Joe: No, no. They have all their feeds coming in, Snapchat and Twitter and god knows what.
Fieldworker: Liam and Erin [late-teens and early twenties respectively], you wouldn’t know their passwords?
Carrie: No. We consider their stuff as private. We don’t need to nose in.
As Joe and Carrie make perspicuous, people employ cohort-relevant access controls that may be driven by a prima facie concern with ‘privacy’, as in the above example where one’s children are concerned, but are governed more generally by accountable expectations of appropriate relationship-relevant behaviour. Thus we find that while partners may routinely access one another’s personal devices, they do not necessarily know their children’s passwords (which very much depends on their age and the expectations that go along with that), and neither do they necessarily share passwords for various application accounts. It is not that they are being ‘private’ - as partner after partner told us they have ‘nothing to hide’ – it is that what is done is not relevant, and is seen as not relevant by both parties, and that accessing it would therefore be inappropriate.
Gene: You know I haven’t got anything to hide, and I don’t think my wife does so we’re kind of fairly open. I wouldn’t mind if she read my messages, you know, we’re not hugely secretive. We try to be open with each other.
Fieldworker: But you’re not actually trawling through one another’s mail either?
Gene: No. There’s an etiquette I suppose, and most email’s pretty dull anyway isn’t it. I wouldn’t look at my wife’s email and social media.
Cohort-relevance underpins sharing passwords, and not sharing them. Whether ‘partners’ or ‘parents and children’ or various categories of ‘other’ entering the home, cohort-relevance is determined by a host of accountable expectations regarding relationship-relevant behaviour. These expectations shape selective gateway access to networks, devices, applications and content and thus enable people to manage a contingent array of cohort-dependent risks that accompany interaction in a networked world. It might thus be said that in devising fine-grained methods of gateway control our participants minimise the potential ‘attack surface’ of the digital on their everyday lives and thus manage the potentially malicious or unintended consequences of interaction in their networked world. It is notable that these methods are not wholly devised to manage ‘attacks’ on their privacy. Indeed, ‘privacy’ was only occasionally invoked to account for the use of these methods, and even then it frequently glossed a range of alternative concerns: child safety, good parenting, avoiding embarrassment, doing things for others, behaving appropriately, being a good host, etc. It would thus appear that gateway management is wrapped up in a locally contingent array of mundane concerns involved in the conduct of interpersonal relationships, and that it is attacks on the accountable conduct of these relationships that our participants seek to minimise, if not prevent entirely. This methodical concern with relationship management is also evident in our participants’ management of digital content.
Digital content and privacy
Most of the households we spoke to stored a wide range of personal content. This included records of passwords stored in various formats: some used a personal code or mnemonic, and a few kept digital records that were encrypted, but most used physical formats (handwritten notes) and stored them in a variety of personal locations that are typically hard for outsiders to access. Content also included financial records of all kinds, an array of ‘important’ documents (insurance certificates, scans of passports, national insurance numbers, television licenses, receipts, work-in-progress, etc.), family videos and photographs, and, for a few, activity data generated by smartphones and wearable devices. These data were distributed across various devices and servers. The privacy of financial records, particularly bank details, was of common concern across our participants and these were typically stored locally, rather than online. However, a great many important documents were stored ‘out there’ on email servers, as this is the mechanism whereby many such documents are delivered, and on online solutions (OneDrive, Google Drive and Dropbox were frequently mentioned).
Joe: It’s a whole archive of my photographs and stuff that I’m entrusting to Microsoft.
Fieldworker: Trust is the key word there.
Joe: Yeah, it’s just trust. Purely that really. But what do you do with it though? Do you download a copy onto a hard drive and stick it in a safe somewhere in your house, you know? And how do you manage then to keep updating that?
Joe: I suppose if something horrible did happen, like Microsoft wiped all my data, how would that affect my life? Ultimately it probably wouldn’t really.
Fieldworker: So you reason about the risk?
Joe: Sometimes. I know there’s risk there and I know I’m placing a lot of trust in these big companies, but then who do you place your trust in? If the government, like the inland revenue, said, oh we’ve got a secure vault now, we can store your data, would you trust them more than you trust Microsoft? They could pass your information on to the security services.
Fieldworker: Sounds like you’d trust them less?
Joe: I think I would trust the government less to secure my data. Companies like Microsoft and Google have got their reputation, haven’t they, and that’s what their income’s based on. So if they break trust with millions of people around the world then that’s really going to affect them. So I suppose there’s that incentive for them.
Joe’s account, which is by no means unusual, makes it perspicuous in the first instance that the use of online solutions provides a practical way of managing collections of personal data. It is visible too that in using online solutions, people are not unaware of potential risks in putting personal data out there, but that these are mitigated by ‘trust’. Furthermore we can see that ‘trust’ is not groundless, but predicated on providers having a reputational and financial incentive to keep personal data secure. And we can see too that people do not blindly put things out there – Joe may have ‘a whole archive of stuff’ online but it is not stuff that if lost in some way would ‘ultimately affect his life’.
Putting personal data online is a considered act then, as can be seen in the sharing of family photos and videos, which our participants reported to be the most ‘private’ category of personal data.
Fieldworker: Who has access to the videos on Vimeo?
Kit: It’s just family, isn’t it?
Tim: Yeah, family.
Kit: They tend to be on holiday, it’s not like there’s anything - if the kids were, say, running around in the nude on the beach, then I wouldn’t like it.
Kit: But I don’t think there’s anything like that really.
Tim: No. You do a sort of risk assessment don’t you? You know, how uncomfortable would I feel about that? I would feel uncomfortable if over the long term we were shoving pictures up of our kids. You’re sort of relinquishing control that they ought to have over keeping that private if they want to. You know, their history online, public, and they can’t get away from it. It’s sort of incumbent on us to be responsible enough to say, you know, they should have that choice. You kind of owe it to them to be a bit more responsible, rather than shoving everything online. But at the same time I don’t have any problems with the odd video or photo here and there.
As Kit and Tim make visible, consideration of what to put online turns upon ‘assessing’ its potential impact not only on self, as elaborated above by Joe, but on others. In this particular case we can see that in assessing whether or not to put family photos or videos online, ‘privacy’ is invoked with respect to parental responsibility and the foreseeable need to allow children to exercise their autonomy. More generally, we found that our participants routinely carried out impact assessments with respect to themselves and others, and this was particularly pronounced with respect to personal content posted on social media.
In saying that people routinely carry out impact assessments when putting personal content online we are not suggesting that they administer a formal procedure as defined, for example, by data protection bodies (e.g. ICO 2014). Rather, the impact of putting personal data online is assessed through a wide variety of ‘members’ methods’ (Button et al. 2015), glossed by accounts such as ‘would it ultimately affect my life’, ‘how uncomfortable would you feel’, ‘it could be quite embarrassing’, etc. The methodical application of reasoned judgements such as these inhabited our participants’ accounts of posting personal content on social media, and was complemented by discrete impact management practices centred on the use of multiple social media channels.
Alice: I use Facebook and WhatsApp, BabyCentre - interestingly that’s they only thing that I do anonymously. I don’t do it under my own name, because originally we were having trouble conceiving. I was a having a whole conversation about fertility problems I didn’t really want to have under my own name, and it’s still not really something I want associated with my name in terms of work or anything. I don’t want these connected. I don’t need that to be the thing that people get when I’m going to a job interview or whatever. At work I have an account for the council and one for the police and I don’t want those two to get jumbled up either. So I have me at home, me at work, and me at work when I’m doing stuff with the police, and I try and keep them all separate. I try quite hard to keep these things separate.
Alice’s comments encapsulate common practice amongst our participants, which sees them using multiple social media channels, and not infrequently anonymous social media channels where sensitive data is concerned, to enable the ‘separation’ of different cohorts, thereby limiting the potential impact on the self of posting personal data online. At the same time, and reflexively, the use of multiple social media channels enables the relationship-based tailoring of personal content. As Paul puts it,
Paul: I’ve got Facebook and I’ve got Twitter. I have a network of friends on Facebook that includes my family, some colleagues, things like that. On Twitter, even though I use my proper name, I don’t follow anybody that I know personally. I quite explicitly avoid connections with especially work colleagues on Twitter. I don’t follow any of them and I don’t want that link to be made, because I want to be able to behave in a different way on Twitter.
Thus we find in case after case that social media channels are exploited as relationship-relevant channels, though we note that there is no stability in choice of channels (e.g., that Twitter is used for a certain kind of cohort and Facebook for another). It is not simply the case that different channels are used for different purposes either, but that different channels are tied to different cohorts and the particular kinds of relationship that hold between their members.
The methodical ‘separation’ and ‘tying’ of cohorts to specific channels to manage the potential impact of posting personal content online also involves actively managing ‘follower’ relationships to maintain cohort separation and turns upon taken for granted expectations of data sharing. Thus, and for example, participants may use Facebook to post ‘public facing’ content to a broad cohort of followers, and use WhatsApp to post much more ‘personal’ content for a select few. In such circumstances data is shared on the basis of an assumed right of disclosure, which is taken to be commonly understood by recipients and further limits the potential impact of posting personal data online. Not that this always works.
Michel: One of the reasons why Carrie is not so sensitive about posting family photos on Facebook is because pretty well the only network who get to see that are family and friends. Whereas with me, the network who can actually see that includes work colleagues, some of whom I don’t even know very well even. I mean, we’ve had photos of me in fancy dress for instance on Facebook and it’s become clear that other people have had access to those things!
Fieldworker: So it’s other people’s stuff that you’re in and they’ve put up?
Michel: It’s never stuff that I share myself, no, ‘cause I don’t do that kind of stuff.
Carrie: I do, of fancy dress (laughs). Have you seen that one (Carrie holds up her iPad to Michel, and then turns it to show the fieldworker).
Fieldworker: (Laughs at photo of them both in fancy dress).
Carrie: It’s stuff like that he doesn’t want me to put on.
Michel: This is the problem for me. I can control it all I like myself, but I have no control over what other people do.
It is tempting to see in Michel’s lack of control a violation of privacy at work but what actually concerns our participants in cases like these, and drives the separation of cohorts, is the accountability of their actions.
Sylvie: I tried for a while having people graded by their friendship status. So I’d have like real true friends, and then I had my work friends, who would ask me to be their friend but I felt kind of like socially awkward saying no to on Facebook, so I had them as acquaintances. It got really confusing. You know, someone might graduate from being an acquaintance to an actual friend but they still work with you, and then they come into work and say “oh I saw that picture of you at the park, it was really cute” and everyone else goes “what picture? I didn’t see that on Facebook.” So, I’ve given up on that. It just got really hard.
Whether occasioned by someone posting something personal about you online or you posting it yourself, it is not privacy per se that concerns people, but that they can be and occasionally are called to account for their actions by persons to whom they would rather not be accountable. Thus, and for example, Michel has in the past been called to account for wearing fancy dress by people he ‘doesn’t know very well’, just as Sylvie’s selective disclosure of photographs to Facebook friends led to her relationship with colleagues being called into question. In either case, and many more in our study, it was the inappropriateness of having to account for things said and done to people whose business they are not, with concomitant ‘uncomfortable’ and even ‘embarrassing’ effects, that concerned them. For many of the younger participants in our study the management of accountability through the management of follower relationships was especially bound up with concerns about not having to account for their online activity to the people they knew best in everyday life, such as friends and family.
Fieldworker: And what kinds of people are following you [on Tumblr]?
Evelyn: Erm, very few people who know me in real life. Very, very few. There’s literally only Lionel and Tom [her brothers] I think who actually know me in real life. That’s probably why my - I’m more comfortable being more personal there ‘cause there’s less people I actually know personally.
Fieldworker: You’re sharing with people you don’t mind seeing that information obviously. If you got requests to follow you on Tumblr from people that you know in the real world outside of Lionel and Tom, how would you feel about that?
Evelyn: I would feel more uncomfortable certainly.
Cohort separation and channel tying is driven by the need to limit the impact of the digital on the accountability of persons and their actions, whether it is done to limit one’s own accountability (e.g., Michel’s or Sylvie’s or Evelyn’s) or that of others (e.g., Kit and Tim’s children). We thus find that our participants actively constrain the availability of personal data, not only in terms of restricting its distribution through the ad hoc selection of relationship-relevant channels, but also in terms of the temporal durability of data and/or the ability to delete personal content, which may drive the selection and use of specific channels for specific cohorts (both Snapchat and Twitter were frequently cited in respect of these issues). Where and when the need for ‘privacy’ does enter the equation, we find that our participants manage it in one of two fundamental ways: firstly, by ‘channel switching’ and moving from written to oral media in particular (e.g., switching from Twitter to Skype), and secondly, by simply not putting such materials online in the first place.
Sarah: Obviously I’m pregnant at the moment, but otherwise I had this [Fitbit app] to try and lose some weight and I didn’t really want people knowing, you know, to judge somehow how many more calories I was burning than them because I was so much more heavier than they are. I’d rather keep that myself really. Some people link it with their Facebook but my ideal nightmare would be for that to be on Facebook saying, oh Sarah did this today or she’s lost two pounds or whatever. That for me is very separate information that I don’t really want to share with people. So I limit that.
Carrie: If I’m not happy to share it then it doesn’t go anywhere.
Michel: We share stuff about health in terms of weight and steps and things like that. We talk to each another, you know. Verbal communication suffices for sharing that kind of data, positive and negative.
Alice: I wouldn’t put anything on that I wasn’t happy for anybody to see. Managing real private stuff is – stuff shouldn’t exist, that’s the level of it. It doesn’t get written down. It doesn’t get put in a photo. It doesn’t exist. Definitely do not put online.
Just as we find that our participants have devised and use fine-grained methods for handling gateway control, thereby minimising the potential ‘attack surface’ for malicious or unintentionally damaging interaction in their networked world, so we also find that they have devised and use fine-grained methods for minimising the potential ‘attack surface’ on their personal data. They do not, then, put personal content online blindly, whether it be for purposes of managing collections of personal data or for purposes of sharing personal material with others, but through considered judgements where they assess the potential impact this might have on self and others. The methodical management of potential impact is particularly pronounced with respect to the distribution of personal content via social media. Here we find that our participants routinely employ social media channels as relationship-relevant channels to effect separations between the different cohorts they engage with. In tying different cohorts to different channels our participants effectively put people and personal content in different ‘buckets’ or ‘silos’, thereby limiting the potential ‘attack surface’ on their personal content, and with it themselves and implicated others. Again, it is notable that these methods are not wholly devised to manage ‘attacks’ on privacy. Indeed, ‘privacy’ was only occasionally invoked to account for the use of these methods. When ‘privacy’ was used it frequently glossed the primordial concern our participants have with the accountability of persons and their actions and the concomitant imperative to avoid the unpleasant effects of inappropriate disclosure. So again, when we look at domestic privacy practices we find that ‘privacy’ glosses an array of relationship management practices, and this is also evident in our participants’ mundane interactions with the online world at large.
Online interaction and privacy
Our participants were keenly aware that their interactions with the online world at large had personal consequences, particularly an increasing amount of targeted advertising based on their Internet activities (browsing, shopping, downloading, etc.). Many experienced the increasing flow of adverts resulting from their online interactions as a ‘bombardment’ and ‘nuisance’, but for others it was occasionally more personal than that:
Pat: It’s just a nuisance. It’s yet another window that’s in your face.
Sara: There’s one thing that worried me though. Do you remember that time – my family’s Jewish, and my uncle sometimes posts things, just once or twice, about searching for family in the Ukraine and stuff, and I was starting to find a shop selling everything Jewish coming up advertising on my page. So they’ve obviously made a connection that somewhere in the family there is somebody Jewish, and they’ve advertised that to me so that means obviously that it’s visible to somebody. It makes you very aware that people are watching what you’re doing. It’s like I was explaining to Hannah (teenage daughter) the other day. She was getting ads for pregnancy tests and she says, why am I getting this stuff. I said it’s targeted because you’re a teenage girl. And she said, but I’ve never gone on any site like that, I’ve never looked at anything. I said it doesn’t matter, they can tell by the type of sites that you do go on to – they can put you within an age group and sex group and so you’re targeted. She really doesn’t understand that even so. She says I go on gaming sites, I could be a boy. Yeah, you could, but even so the indications that you give are a flag to somebody.
Whether motivated by sheer ‘irritation’ or deeper concerns, such as the potential discriminatory consequences of being ‘tagged’ as Jewish, our participants adopted a variety of methods for managing interaction with the online world at large.
Thus we found that people routinely employed ‘throwaway’ email addresses (some in their own names, some not) to control the bombardment, the use of which turns on discerning the potential impact of handing over contact details when signing up to online services.
Joe: I’ve got a Gmail account, which I’ll occasionally give out to something that I know might generate spam or something.
Sarah: If I feel like it’s one of those where it’s like constantly gonna be, oh remember this or have you seen our latest sale or whatever, I’ll pick this old old email address that I don’t use for much else.
Lennie: I’ve got an old one that I don’t use for anything apart from signing up for things I know it’s going to tell somebody something. If it’s a service I know that at some point is going to sell that data onto somebody else, that’s the address it’s going to get.
Then, of course, we found the widespread use of ad blockers. Again, this was largely motivated by ‘irritation’ and ‘annoyance’ and the ‘convenience’ ad blockers provide in terms of smoothing out the ‘disruption’ to online interaction caused by a constant stream of ‘pop-ups’. However, some of the participants were also motivated by the ‘intrusive’ nature of online advertising:
Joe: If I browse something on Google - you know, when you’re properly logged in on the browser - then I’ll find if I’m looking at Facebook on my phone that these targeted adverts pop up, which are related to what I was browsing earlier on Amazon or Google or somewhere like that, and they’re appearing in the Facebook feed.
Fieldworker: Does that feel like an intrusion?
Joe: Yeah, it does a bit.
Participants reported using ‘whitelists’ to manage intrusion, with one (our IT support engineer) even implementing these at router level. In addition to throwaway emails, ad blockers and whitelists, we found that our participants were also managing the impact of the online world at large on everyday life by turning to privacy-preserving search engines, such as Startpage and DuckDuckGo, to reduce the number of ads they were being bombarded with.
We found too that our participants attempted to control the flow of personal data in the online world at large by ‘ticking’ or ‘unticking’ checkboxes to constrain the sharing of personal details, and by managing cookies. This included judging when to accept cookies and what the consequences of doing so might amount to.
Christine: I don’t always accept cookies. I accept cookies if I know I’m really wanting to get into this thing, but if I’m just skimming through something and they ask me that, then forget it.
Brian: I guess you if you want actually to go and buy …
Christine: Yeah, yeah.
Brian: Then you accept, or I accept, cookies.
Brian: As soon as you accept cookies obviously then they have your, you know, your patterns.
Cookies were not accepted blindly, then, but their acceptance turned upon the relevance of doing so to our participants’ activities. Thus, ‘just skimming’ through things, for example, did not warrant accepting cookies, whereas ‘buying something’ did, with the concomitant knowledge that in doing so one was making one’s ‘patterns’ of behaviour visible to third parties.
This visibility of behaviour patterns was not simply accepted as a ‘cost’ of being an inhabitant of the online world at large, however. Instead we found the widespread use of private browsing modes and the routine ‘dumping’ of caches by our participants to manage the uncertainties inherent in third parties having access to personal information.
Lewis: The browsers are configured to dump the cache when you close them. Wherever I can disable tracking I will do.
Fieldworker: So what motivates that then?
Lewis: It’s not knowing how third parties manage that information. If I don’t leave it on no one else can find it. So I took the decision to prevent them from being able to find it by removing it. So whenever you close the browser it will wipe your history and cache, and if you’ve not closed the browser properly or if the machine’s been hibernated rather than shut down I’ll go in once a week and clear the cache manually.
Clearing browser caches was commonplace amongst many of our participants, whether it was done through the use of plugins on a daily basis or manually on the basis of various contingencies (e.g., using credit cards online or doing routine digital housekeeping). In either case, clearing caches provided our participants with a means of reducing the visibility of online behaviour and thereby managing the potential impact of third party intrusions on everyday life.
It is also notable that the clearing of caches was done to manage the visibility of online behaviour to those much closer to home.
Fieldworker: So do you clear caches, cookies or search histories?
Kit: The only time I’ve done it is when it’s like Tim’s birthday and I try to do things secretly so he doesn’t know. I put private browsing on and I – I’ve asked him before and he told me how to empty things.
Tim: Clear the cache, yeah. Yeah the only other times I could see mild embarrassment is if you’ve gone out and I’ve got Netflix to myself and then I’ll be like, right, good car chase film – when do I ever get to watch good car chase films? But then obviously it comes up, doesn’t it, you know, like next time you go on Netflix, you’ve been watching …
Kit: Oh! Hmmm.
Tim: So you can log onto Netflix and delete these things.
Fieldworker: And do you?
Tim: No I don’t actually. Well, if I did, I wouldn’t tell you, but I don’t. But I definitely wouldn’t answer that honestly if I did.
So, whether occasioned by someone’s birthday, or watching car chase films, or a host of other prosaic matters, caches were also routinely cleared to render online behaviour invisible, and thus unaccountable, to others within the home.
Again we can see that our participants employ fine-grained methods for reducing the potential ‘attack surface’, this time of the online world at large on everyday life. These methods exploit a range of technological mechanisms (throwaway email accounts, ad blockers, whitelists, privacy-preserving browsers, cookies, consent forms, cache clearing, and private browsing) and are employed to reduce ‘irritating’, ‘annoying’, ‘disruptive’, and occasionally disturbing ‘intrusions’. The methodical use of these mechanisms sees our participants working to manage the flow of personal data and the visibility of online behaviour in order to constrain what third parties can see and thus come to know about participants’ online behaviour, which in turn minimises unwarranted intrusions. We can also see that some of these mechanisms, particularly private browsing and cache clearing, are methodically employed to render online interaction invisible to others within the home. Once again, then, we find that these methods are not about privacy per se, but about accountability. Thus we find that our participants occasionally mask their online actions to reduce if not prevent those actions being called into account by those they live with, and do so for a host of accountable reasons implicated in the day-to-day conduct of interpersonal and indeed intimate relationships (surprising others, indulging in personal pleasures, etc.). We find too that a concern with accountability and relationship management also underpins our participants’ efforts to avoid constant ‘bombardment’ by the online world at large. In short, the online world at large is not one that our participants want to have an accountable relationship with, other than as an occasioned matter, e.g., when buying goods. However, even then they seek to constrain what can be seen and known about their online actions.
Thus they work, and work methodically, to reduce the potential attack surface of the online world at large on their everyday lives.