Introduction

Some think that technology can disrupt our concepts (see, e.g., Löhr, 2023) or that certain normative concerns can motivate a need to change or ameliorate a given concept (see, e.g., Haslanger, 2000). This raises the question of whether we can create, design, engineer, or define concepts that are sufficiently robust so as to avoid a broad set of counterexamples and conceptual challenges that normally can lead to conceptual disruption (i.e., whether we can achieve what we can call undisruptable or stable concepts).Footnote 1 In this paper, I aim to argue that such stable concepts are possible and that we can create them.

I will do so based on a case study of a proposed definition of information security—the so-called “Appropriate Access” definition, which aims to define the state in which information (systems) are secure (Lundgren & Möller, 2019). As I will argue, it is the recognition—in Lundgren and Möller (2019)—of security as a stakeholder-relative, value-laden concept, in which the definition is partially separated from its application, that allows the Appropriate Access definition to avoid counterexamples and challenges.

That the Appropriate Access definition has a value component is also important for the generalizability of the case study, since some formal definitions (such as those in the mathematical domain) can avoid the type of challenges I am concerned with here (without making use of any value component). Relatedly, it is important to note that the generalizability of the case study does not depend on Möller and me being correct in our analysis (mutatis mutandis, for other formulations). What I am evaluating is whether, if the Appropriate Access definition is sensible in a given context, it can then, as Möller and I claim, remain stable over time. That is, I am not testing the fundamental analysis of security; I am testing whether the analysis can avoid counterexamples and conceptual challenges, given that it is a sensible analysis.

In the following section, I will provide a brief background about conceptual disruption, conceptual challenges (including counterexamples), and the theoretical virtues of undisruptable, or stable, concepts. Next, in the third section, I will introduce the Appropriate Access definition. After that, in the fourth section, I will turn to evaluate the definition’s ability to withstand counterexamples and certain conceptual challenges. Based on the case study, I will then, in the fifth section, briefly indicate how the ideas that I’ve discussed in previous sections can be generalized. Finally, in the sixth section, I will end the article with some concluding comments.

Lastly, two comments on the contours of my arguments. First, I will avoid any claims about different forms of “analysis” in many of the technical senses that are popular today. I believe that my findings are relevant regardless of whether one’s aim is amelioration, engineering, explication, stipulation, replacement, or something else. Second, the argument arguably holds, mutatis mutandis, if, instead of stable concepts, we talk of stable definitions (judging their success by some set of normative criteria, cf. Belnap, 1993).

Conceptual challenges and conceptual stability

As just mentioned, this section aims to provide some background explanations about conceptual disruption, conceptual challenges, and counterexamples, and the virtue of conceptual stability. To do so, it is helpful to briefly contextualize this discussion in the current debate on conceptual engineering.

To some extent, conceptual engineering is a new name for old ideas. That is, the basic aim of improving concepts is familiar from earlier periods in modern philosophy through the concept of explication (Carnap, 1950), but it has arguably always been part of the philosophical endeavor. If the conceptual engineering agenda can differentiate itself from older ideas, it is through its more radical aim of improving concepts not only through conceptual revision but also through conceptual replacement. While the former aims at the more moderate goal of revising a concept, the latter aims at replacing old concepts with new, more suitable ones.Footnote 2

The conceptual engineering agenda and closely related discussions will be relevant for the upcoming discussion since I will take ideas from that debate to evaluate the Appropriate Access definition. That is, I will ask if the definition can be challenged based on ethical or epistemic considerations that have been considered in the recent literature.

Tangential to the conceptual engineering debate, there has also been a recent flurry of examples of how new technology creates new forms of challenges for various concepts (see Löhr, 2023, for several interesting examples). For example, in the robotics literature, researchers have asked whether friendship can extend to human–robot relationships (Danaher, 2019); whether a robot can be a good colleague (Nyholm & Smids, 2020); and whether a robot can be a citizen or acquire rights normally associated with personhood (Rainey, 2016; Coeckelbergh, 2010). These questions challenge the standard understanding of these concepts. For example, friendship is standardly not considered a relation that can hold between humans and machines (i.e., there is a reason why we use the term imaginary friend to describe situations in which a child’s friend is merely pretend).

Of course, for every example, we should ask whether the concept is really challenged or whether someone is merely confused into thinking that it is. That is, we can sometimes be wrong about whether a supposed counterexample provides a genuine challenge for a given concept. However, that confusion also motivates the desire to create concepts that avoid challenges (even if we could be confused about that as well, it seems at least plausible that conceptual stability can minimize the confusion; although that is an empirical claim that I cannot support in this paper).

So far, I have spoken of conceptual challenges, which can be broadly understood as any form of challenge against a concept. Conceptual disruption is a narrower concept, a strong form of a challenge. According to Löhr (2022), conceptual disruption is any “challenge or interruption of the ways in which the individual or group has intuitively classified individuals, properties, actions, situations, or events, leading to classificatory uncertainty, i.e., uncertainty about the application conditions of a word or concept” (p. 838). Since my aim here is not to discuss these fundamental concepts, I will simply accept the broad contours of Löhr’s proposed definition (but see Marchiori & Scharp, 2024, for a critique).

What is important for the present paper is the idea that there can be challenges that are so severe that they lead to classificatory uncertainty. That is important since our ability to navigate and understand the world and our place in it arguably depends on concepts. As such, conceptual disruptions challenge our ability to navigate the world and understand it and our place in it. If we can avoid conceptual disruption, then that has clear benefits. If stable (or undisruptable) concepts are possible, then they provide a means against classificatory uncertainty. Moreover, if the task of philosophers and academics is partly aimed at providing and improving classifications for academia, or elsewhere, then a stable concept is ceteris paribus better than a disruptable concept, given that it avoids the need for future conceptual work on that concept. The benefits of a stable concept can also be more technical, since it may allow a simpler form of comparison between contexts that would otherwise raise classificatory challenges.

It is important to realize that the risk of conceptual challenges and disruption is not limited to rare special cases. This can easily be illustrated through the concept of socially disruptive technologies—that is, technologies that yield social disruption, part of which can be conceptual (see, e.g., Hopster, 2021, for a framework of different ways in which socially disruptive technologies can create disruption). Modern, socially disruptive technologies include technology and systems based on artificial intelligence and big data, biotechnology such as CRISPR/Cas9 (simplified, scissors for cutting DNA), and geoengineering. Historical examples of socially disruptive technologies include radical developments in transportation and communication, such as cars and airplanes, the telephone, and the printing press. These are not rare technologies, and the risk of conceptual challenges and disruption strongly associated with them—as the previous examples from the robotics literature illustrated—makes the desire to avoid such challenges pressing.

Of course, one may recognize that conceptual disruption (just like conceptual change and replacement), can be positive because it shows the deficiency of our current conceptions. However, if we can have stable concepts, then there are no deficiencies to reveal (granted that the analysis that underpins each concept is good).Footnote 3

To simplify the upcoming discussion, I will focus on a broad set of prima facie possible counterexamples and associated conceptual challenges. Counterexamples are traditionally understood as relating to the extension and intension of the concept. Similar to traditional counterexamples, a concept can also be challenged on normative grounds (in the sense that we question the way a certain concept is used or currently functions) or be criticized because of classificatory uncertainty (i.e., in a case where a concept cannot provide a clear answer). The question I will address is whether we can design concepts that can avoid any kind of relevant counterexamples or challenges as broadly understood above. Hence, I am using “undisruptable” and “stable” in a fairly technical sense, given that a concept can be disrupted for purely bad reasons. I set bad reasons aside and focus on conceptual disruptions that would be taken seriously by a boundedly rational agent. If we can create stable concepts, that is, concepts that avoid counterexamples and challenges that a boundedly rational agent would, or should, take seriously, then we can avoid future conceptual work. That is, we can avoid the need to re-design concepts because of future developments such as norm changes or new technologies. This of course means that there is a set of empirical situations in which concepts are disrupted but that fall outside the scope of my focus here—that is, because they are disrupted for reasons that a boundedly rational agent would not, and should not, take seriously. I hope this makes it clear that I am talking of a form of stability that would be of interest to those who work on developing concepts that function, at least partly, as terms of art. Although philosophers sometimes think of themselves as answering the question of what something is, I take it that, for example, ethicists are also interested in concepts that serve a role in ethical analyses (broadly construed).

The Appropriate Access definition

In this section, I aim to explain the basis of the Appropriate Access definition. To do so, I will start by presenting a critique of the most established definition of information security: the so-called CIA definition. As noted in Lundgren and Möller (2019), this definition dominates not only in computer science and other relevant domains of academia; we can also find it in law (e.g., in US federal law) and in standards (such as ISO or NIST). Despite the broad occurrence of the CIA definition, the way it is specified varies in the literature, so it may be more appropriate to speak of CIA definitions. Nevertheless, what they all have in common is that they define the secure state of information in terms of the satisfaction of three properties forming the CIA triad: confidentiality, integrity, and availability. To bring clarity to some divergences in the literature, Möller and I proposed a formally correct definition that can be used for evaluating the CIA definitions:

The CIA definition of secure information: some information I is secure if, and only if, all parts of I retain the properties of confidentiality, integrity, and availability. (Lundgren & Möller, 2019, p. 422)

With that in place, we can now turn to the basis of the critique of the CIA definition(s). For the present purpose, I need not explain all the arguments from Lundgren and Möller (2019) in detail. It will suffice to give a brief example.Footnote 4 Hence, I will present the argument against the necessity of only one of the properties (availability), which in the 2019 paper was defined (using the definition from the ISO 27000 standard, 2016, p. 3) as the “property of being accessible and usable upon demand by an authorized entity”.Footnote 5

The counterexample from Lundgren and Möller (2019) to availability as a necessary criterion is the use of timelocks. Timelocks are devices that make objects inaccessible for a pre-defined period of time. Hence, timelocks violate availability. However, the whole idea of timelocks is that they provide security by making objects unavailable. This shows that availability—as defined—cannot be a necessary property. We can conclude that whether unavailability is acceptable—and if so, to what degree—will depend on the demands set for the information (system). Based on this kind of contextual reasoning and other examples, it is concluded that information security must be defined contextually and relativized to a stakeholder. That is, the analysis from Lundgren and Möller (2019) is arguably an analysis of information security for a given stakeholder.Footnote 6

The analysis starts with a general definition of security (in all quotes “AA” is used as an abbreviation for the Appropriate Access definition):

AA (general): The object O is secure for stakeholder H if, and only if: For every agent A, and every part P of O, A has just the appropriate access to P relative to H. (Lundgren & Möller, 2019, p. 428)

This is then applied to information:

AA (information): The information I is secure for stakeholder H if, and only if: For every agent A, and every part P of I, A has just the appropriate access to P relative to H. (ibid.)

And, finally, to information systems:

AA (information system): An information system S is secure for stakeholder H if, and only if: For every agent A, and every part P of S, A has just the appropriate access to P relative to H. (ibid., p. 429)

There are two things about the Appropriate Access definition that any philosopher would notice. First, as mentioned in the Introduction, it is value-laden (given the use of the concept of appropriateness). In fact, each of the Appropriate Access definitions defines a so-called thick concept (i.e., a concept with both normative and descriptive elements). Second, these definitions are at a higher level of abstraction than the CIA definition. In Lundgren and Möller (2019), it is argued that this level of abstraction is necessary to avoid counterexamples. However, this raises the question of whether the definition is practically useful. In response to that challenge, Möller and I argue that the definition must be applied in a given context.Footnote 7

Simply put, one must look at the needs of the stakeholder to see which demands those needs imply. That is, it is the stakeholder’s needs that make clear—in the application—what access is appropriate. In many cases, this may turn out to be close to satisfying the CIA triad. Applying the definition in context is a two-step process: first, we make the needs of the given stakeholder (H) clear; second, we set up criteria for when access for every agent is just appropriate, relative to the needs of H. A stakeholder (H) can be understood both as an individual and as an organization (the latter may yield contradictory demands, something I will discuss later). The sketch below illustrates this two-step structure.
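To make the two-step structure concrete, here is a minimal sketch in Python. It is my own illustration, not anything proposed in Lundgren and Möller (2019): the function names, the toy access levels (“read”, “none”), and the medical example are all assumptions introduced purely for illustration.

from typing import Callable, Iterable

Agent, Part, Access = str, str, str

def is_secure(agents: Iterable[Agent],
              parts: Iterable[Part],
              access: Callable[[Agent, Part], Access],
              appropriate: Callable[[Agent, Part, Access], bool]) -> bool:
    """Schematic rendering of AA: the object is secure for stakeholder H iff
    every agent has just the appropriate access to every part, where
    `appropriate` encodes the criteria derived from H's needs (step two)."""
    return all(appropriate(a, p, access(a, p)) for a in agents for p in parts)

# Step one (done in context): H's needs, e.g. "only the doctor may read the record".
def appropriate_for_H(agent: Agent, part: Part, acc: Access) -> bool:
    return acc == ("read" if agent == "doctor" else "none")

# The actual access conditions in the situation being evaluated.
def actual_access(agent: Agent, part: Part) -> Access:
    return "read" if agent == "doctor" else "none"

print(is_secure(["doctor", "visitor"], ["patient_record"],
                actual_access, appropriate_for_H))  # True: access is just appropriate

Note that nothing in is_secure fixes what counts as appropriate; that is settled only when the stakeholder’s needs are supplied. This is exactly the separation between definition and application that the rest of the paper relies on.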

This explains, or so it is argued in Lundgren and Möller (2019), how the

AA definition manages to deal with the time-lock, and other related counter-examples, because if security is retained when information is time-locked it is because this is in line with the stakeholder’s needs (i.e. it is part of what is just the appropriate access relative to that stakeholder). (p. 439).

Before I turn to the counterexamples that will be discussed in this paper, two further clarifications are needed. First, despite there being more than one Appropriate Access definition, I will, for simplicity, refer to them in singular form (as they could all be combined in one formulation). Second, since the Appropriate Access definition states that some information (system) is secure only in cases of just the appropriate access, one may take this to indicate that there is only one precise access condition that satisfies this for each agent. However, although it is an open question whether, and if so to what degree, access can vary while remaining just appropriate, it is fully compatible with the definition that just appropriate access can be understood as ranging over a set of access conditions, for each agent, rather than only one condition.

Evaluating the Appropriate Access definition

In this section, I will evaluate the Appropriate Access definition to see if it remains stable against potential counterexamples and challenges (as previously characterized). I will address various normative issues related to bias (in the first subsection), epistemic critiques (in the second subsection), and technological counterexamples (in the third subsection); finally, I will turn to two more technical issues. The first is the issue of whether the analysis implied by the definition is empty (in the fourth subsection). Second, since the concern here is with conceptual stability, I will discuss the potential problem of the concept being disrupted because the constitutive parts of the concept’s definition are disrupted (in the fifth subsection).

As I explained in the second section, “Conceptual challenges and conceptual stability,” avoiding counterexamples and conceptual challenges (broadly construed) will avoid the type of conceptual disruption that I am interested in (i.e., those reasons for disruption that a boundedly rational agent would or should take seriously). Beyond a discussion of traditional counterexamples, I base the particular challenges on issues that have previously been discussed in the conceptual engineering literature.

Problem of bias

In this subsection, I will address some potential criticisms of the Appropriate Access definition related to bias. A worry for thick concepts (i.e., concepts with both normative and descriptive elements) is that they are biased in some sense. However, in the case of the Appropriate Access definition, this is clearly not the case, since it is stakeholder neutral. That is, it is not biased toward any particular individual or entity. A related type of worry is that the concept of appropriate access is incoherent. That is, the definition’s lack of bias leads to incoherent conclusions, such that x is secure if and only if x is insecure. This is discussed in Lundgren and Möller (2019), in which problems of information security in the Trump presidency serve as an illustration. On the one hand, the commander-in-chief needs access to certain information. On the other hand, Trump was not a very trustworthy protector of secure information, so he should not have had access.Footnote 8 Hence, Trump both should and should not have had access to the information. What we should conclude from such examples is that there are security dilemmas. A good definition should reflect that.

Conversely, we might think that bias can also be good and we might accordingly attempt to criticize the Appropriate Access definition for its lack of positive bias. There are two potential problems with a lack of positive bias for the given concept. First, we might think that security is a good, but if we apply the definition, then in some contexts, security is bad. For example, it is bad if some terrorists’ plans are secure. However, it is important not to conflate the normative goals of security with the ontological question of whether something is secure. Indeed, if we seek to gain access to or reveal some terrorists’ plan, then we first need to know if it is secure; to know if it is secure we need to apply the definition to understand what that means from the terrorists’ perspective. Simply put, what we seek is to infringe upon their information security. Thus, a definition that does not clearly say when their information is secure will not be helpful for a normative goal of improving everyone else’s security.

Second, we might think that the definition should give priority to how security ought to be ordered (or reveal patterns of security bias, for example). Such a normative idea seems to be in line with a socio-political agenda of amelioration (cf. Haslanger, 2000). However, if we want to achieve any socio-political agenda, the same reasoning applies as above: if we want to change the security agenda (e.g., to achieve more equitable distributions of security in the world), we first need to know how security differs for different stakeholders; to know how security differs for different stakeholders, we need an unbiased definition that is stakeholder neutral. For those reasons, it seems best not to conflate the ontological question with any normative endeavor.

Epistemic critiques

Various epistemic concerns have been raised concerning the conceptual engineering goal of explicating and replacing old concepts. For example, Kitsik (2022) argues that sometimes conceptual engineering engages in a form of epistemic paternalism (what she calls “paternalistic cognitive engineering”), in which conceptual engineers, without consent, interfere with someone’s belief formation in a way that violates their sovereignty.

A related worry one may have is what we may call “Orwellian thought oppression.” In 1984 (Orwell, 1949), George Orwell described what at the time was a future dystopian world, in which the nation Oceania rules its citizens through an oppressive surveillance and control system. While 1984 is a common example in discussions on privacy, in general, and mass surveillance, in particular, I want to turn to another aspect of the control system: the engineered language Newspeak, which is introduced to replace ordinary English. The goal of Newspeak is to completely alter, or re-engineer, the English language so as to limit the thoughts that agents can have, in a way that suits the purposes of the ruling class. We can debate whether this kind of conceptual engineering is possible simpliciter, or to the extent Orwell describes in the appendix of his novel, but clearly, conceptual engineering can sometimes aim at an epistemic loss (Podosky, 2018).Footnote 9

Arguably, none of these worries apply. The definition cannot be said to interfere with anyone’s belief formation in any objectionable sense. It provides an analysis of information security; it is left to the user whether to apply it in any given situation. It does not force any beliefs upon the user, beyond what any concept must (i.e., in such a way that it allows the user to understand the concept as such). The worries about epistemic loss or limiting the user’s cognitive abilities do not seem to apply here either. On the contrary, if anything, the Appropriate Access definition requires the user to engage in counterfactual thinking when applying the definition. Thus, if it has any epistemic effects, it can only be that of epistemically boosting the agent using the definition. One may worry that such improvements could be an example of paternalistic cognitive engineering. However, here it is important to recognize that the worries that Kitsik raises have to do with a form of nudging (see, e.g., Thaler & Sunstein, 2021), while what the Appropriate Access definition may lead to is more similar to a form of boosting (see, e.g., Hertwig & Grüne-Yanoff, 2017)—a concept that was developed in response to paternalistic worries about nudging. The idea behind boosting is that instead of nudging users (based on paternalistic considerations) to make a specific choice, we ought to boost their epistemic abilities so that they can—for themselves—make better choices in the future. Granted that the Appropriate Access definition must be applied, it can boost users’ abilities to understand different security scenarios.

Technological counterexamples

In this subsection, I turn to the possibility of (technological) counterexamples. Such counterexamples could prima facie come in three forms. First, some (possible) technology could make it so that some information I is secure even if some agent A has inappropriate access to some part P of I. Second, some (possible) technology could make it so that some information I is insecure even if all agents have appropriate access to all parts P of I. Lastly, some (possible) technology could establish that there are security dilemmas such that access is appropriate if and only if access is inappropriate.

Let me begin by addressing the last issue. As previously discussed, such dilemmas already exist. I have already mentioned the example of President Trump. Hence, security dilemmas cannot serve as a counterexample against the definition, unless it is the case that there shouldn’t be a security dilemma. In such a case I will argue that the definition has not been correctly applied, which is the response I have to the two previous potential counterexamples as well.

I will use an example to establish this point. Keep in mind that I am not defending the Appropriate Access definition; rather I am asking whether there could be counterexamples, granted that the definition is sensible in the present (technological) situation.

Today, an encryption key with 2048 bits is considered secure for any application. Why? Because, practically, it cannot be computed within any reasonable time.Footnote 10 However, quantum computers may change that. Hence, if some information I is secure for some stakeholder H because of encryption at time T, this may change at time T + x if, for example, quantum computing has reached its full potential. However, that does not imply that there is anything wrong with the definition; it just means that we need to re-apply the definition to a new context. That is, we have a stable definition, but a disruptable application. Simply put, new technology can make a secure system/information insecure. However, that is not a challenge for the concept of security; it simply means that a system/information that was secure at time T may not be secure at time T + x. The sketch below illustrates this point.
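To illustrate, here is a toy Python sketch (again my own illustration, not the authors’): the names, the toy access levels, and the assumption that large-scale quantum computing breaks the encryption at time T + x are all stipulated for the example. The point is only that the schematic AA predicate stays fixed while the technological context, and hence the verdict, changes.

def just_appropriate(agent: str, access: str) -> bool:
    # Criterion derived (in the application) from the stakeholder's needs:
    # only the owner may read the information.
    return access == ("read" if agent == "owner" else "none")

def is_secure(access_of: dict) -> bool:
    # Schematic AA: secure iff every agent has just the appropriate access.
    return all(just_appropriate(agent, acc) for agent, acc in access_of.items())

# Time T: the 2048-bit encryption holds, so the adversary has no effective access.
access_at_T = {"owner": "read", "adversary": "none"}
# Time T + x: assume quantum computing breaks the encryption.
access_at_T_plus_x = {"owner": "read", "adversary": "read"}

print(is_secure(access_at_T))         # True: secure at T
print(is_secure(access_at_T_plus_x))  # False: insecure at T + x, same definition

The definition (the predicate) is untouched between T and T + x; only the facts about who can access what have changed, which is the sense in which the definition is stable while its application is disruptable.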

One may worry that this means that we have just pushed the problem of conceptual disruption, challenges, or counterexamples from the definition to the application thereof. However, that is not entirely correct. We have pushed the issue, but not the problem. That is, disruption of the application is not a problem; it is how it should be. Indeed, various developments of new technologies have previously made secure information insecure (or vice versa). That is the way that technological development works. However, that does not mean that we need to change the definition; it just means that we need to re-apply the definition when the relevant contextual factors have changed.

Is the definition empty?

Based on the fact that the Appropriate Access definition avoids the above challenges, one may worry that the definition is stable because it is empty. That is, just as “x is secure if and only if x is secure” is conceptually stable because it is empty, we might argue that the Appropriate Access definition is stable because it is empty.

In response, I will argue that the definition is not empty, but rather that it is pitched at a high level of abstraction. The simple reason to think that the definition isn’t empty is that it seems to provide a meaningful analysis. That is, unlike “x is secure if and only if x is secure”, it says something about what (information) (system) security is. The definition defines security as relational, stakeholder-relative appropriate access between agents and a security object.

To see this more clearly it may be illustrative to compare it with other concepts that have been analyzed in the literature. Take the example of privacy, which is often analyzed either in terms of limited-access or control (see, e.g., Lundgren, 2020):

1) Limited-access: A is in a condition of privacy relative to B if and only if B has limited access to A’s private matters.

2) Control: A is in a condition of privacy relative to B if and only if B lacks control of A’s private matters.

The Appropriate Access definition does not seem any more empty than standard definitions found in the privacy literature.Footnote 11

Are the definition’s constitutive parts open to critique?

A worry one might raise is that any definition of a concept is only as stable as its main constitutive parts. Indeed, suppose the concept of appropriateness is disrupted; then it seems as if it ought to follow that the Appropriate Access definition should also be disrupted.

While this is a complex issue that would deserve more than a full paper, I have two brief responses to this worry. First, I am not convinced that this worry always materializes. In the case of the Appropriate Access definition, the concept of appropriateness is arguably broader than how it is used in the Appropriate Access definition, which means that the function of the concept in the definition may remain unaffected even in the case that the concept is disrupted. That is, a disruption of a constitutive part, P, of a definition, D, of a concept, C, can only disrupt C in the case that P is disrupted in a way that affects the use or function of P within D.

Second, and relatedly, the concerns I have discussed in the previous subsections address some of these possible disruptions, which gives some credence to the non-disruptability of the constitutive parts of the Appropriate Access definition in so far as it matters for the stability of the definition or the concept of information security. Thus, while I cannot respond to this worry by showing that the concept of appropriateness can be given a stable definition (since that would arguably turn on the stability of its constitutive parts), I can argue that the proof of concept just established ought to apply to the constitutive parts of the definition as well. Of course, one may worry that there is a salient difference in so far as information security functions like a thick concept (i.e., if we accept the Appropriate Access definition), while appropriateness is a purely normative concept. However, if my arguments are correct, it is not the descriptive parts of the Appropriate Access definition that make it stable.

Can we generalize from the case study?

So far, I have argued that we can create stable concepts that avoid disruption and counterexamples by looking at a particular example that—as I have argued—successfully avoids counterexamples and other conceptual challenges. In this section, I aim to discuss the basis for why the Appropriate Access definition is stable and what conclusions we can draw from it. As I have argued, what enabled the Appropriate Access definition to remain stable in light of potential challenges was that the definition of information security was separate from the application thereof. I will start by explaining the solution in a bit more detail (i.e., what does it mean to separate the definition from its application?). Next, I will turn to discuss how that relates to different competing theories of concepts (as mentioned at the end of the second section, “Conceptual challenges and conceptual stability”). Finally, based on this, I will turn to the question of whether we can generalize from the case study by looking at another concept (privacy).

The idea of distinguishing between the definition and the application thereof is grounded in the argument from Lundgren and Möller (2019) that a high abstraction level is needed for a definition of information security to provide necessary and jointly sufficient conditions. As I explained in the third section, “The Appropriate Access definition,” the leading alternative to their proposal is to define information security in terms of satisfying the properties of retaining confidentiality, integrity, and availability. However, as is argued in Lundgren and Möller (2019), although these properties arguably hold for most systems, they are neither necessary nor jointly sufficient. On the other hand, a definition that tells us to retain these properties arguably tells us more about what to do practically to ensure that some information is secure. Indeed, a person can understand the Appropriate Access definition without knowing the implications for a given stakeholder, given that this would require knowledge of the stakeholder; similarly, a person can understand the Appropriate Access definition without knowing how to satisfy it in practice, given that this could—beyond the knowledge of the stakeholder—require a relevant technical understanding.

To understand the distinction between definition and application it may be illustrative to compare with attempts to define moral rightness, as Möller and I do:

Attempts to define the concept [of moral rightness] in substantive terms have without exception failed, with only comparably non-substantive contenders such as ‘the act we desire to desire’ or ‘the act having the best consequences’ gaining partial support by moral theorists. (Lundgren & Möller, 2019, p. 431, my addition in brackets)

The act having the best consequences is similarly “incomplete” since we must know what the options in a given situation are, what the consequences of those options are, and how to rank those choices normatively.

Thus, the basic idea that I am advocating here is that we must leave some questions about a concept’s application to practical judgment, which is sometimes informed by empirical facts such as the context of the situation or knowledge about technology or other matters that have relevant consequences for the given concept.

Another illustrative comparison is that the distinction functions much like an indexical (e.g., “I”, “you”, “here”, and “now”). That is, we can define an indexical, but any definition must also be sensitive to the context. Conversely, a definition that attempts to define an indexical while ignoring the context would fail.

The above discussion hopefully explains the model that I have promoted here. However, one needs to recognize that this method raises substantial philosophical questions about how much a good definition, analysis, or conception of x ought to say about x. What should be left to an applied analysis of x after a stable definition of x has been settled? And what analysis must inherently be part of the definition? In some cases, we may be more worried about the high level of abstraction implied by the model used in Lundgren and Möller (2019), while in other cases we may think that conceptual stability is one of the most important criteria of adequacy for any analysis, definition, or conception.

In particular, we may worry about whether it is actually—as I claimed earlier—compatible with any standard theory of concepts. To begin, if we understand concepts as abstract objects, perhaps following Peacocke (1992), it will be essential that the definition of a concept is what distinguishes it as a concept:

Concepts C and D are distinct if and only if there are two complete propositional contents that differ at most in that one contains C substituted in one or more places for D, and one of which is potentially informative while the other is not (p. 2).

Alternatively, if we understand concepts as abilities (see, e.g., Dummett, 1996), what I propose could imply that we might have to distinguish between different abilities (which may also vary between different concepts). In the case of information security, we can distinguish between definitional abilities, the ability to apply the concepts, and the even more practical abilities related to technological prowess.

Similarly, for the cognitive theories of concepts, we might have to distinguish between different forms of mental representations (see, e.g., Fodor, 1987; see also Margolis & Laurence, 2007 for a mixed view). Nota bene, I am not saying that this does not raise philosophical questions; rather what I am saying is that we have no reason to think that different theories of concepts raise challenges for what I am arguing for in this paper.

Now, let us turn to the question of whether this solution can be generalized. One way of answering this question is to ask if it can provide a roadmap to a solution for debates where the critique of definitional attempts has gained traction. It is illustrative to consider (as I did in the fourth subsection of the fourth section, “Is the definition empty?”) the debates on the concept of privacy and the right to privacy. In the last 15 years, there have been influential voices that have promoted methods of analysing privacy and/or the right to privacy in a way that avoids definitions. For example, Daniel J. Solove, in his Understanding Privacy, argues that the conceptual disagreements are due to an erroneous approach: that is, the failure to find an agreement on how to analyse the right to privacy (which is arguably what Solove is engaged in, although he uses the term “privacy”) is because we have—or so Solove argues—focused on attempting to define it.Footnote 12

Solove proposes that instead of defining the right to privacy, we ought to understand it along the lines of the Wittgensteinian notion of family resemblance. However, there are multiple problems. It is not clear how we make a judgment about whether something is a privacy concern or not, and is therefore protected by the right to privacy (e.g., what is sufficiently similar?). And while Solove thinks that attempts to define privacy have failed because the definitions are too broad, too narrow, or too vague, his own proposal does not hold up against his critique of the alternatives. Consider the following example. In the book’s second chapter, Solove sets out to show that all definitional attempts have failed. Turning to the famous analysis of the right to privacy by Samuel Warren and Louis Brandeis (1890), Solove says:

The right to be let alone views privacy as a type of immunity or seclusion. As many commentators lament, defining privacy as the right to be let alone is too broad. For example, legal scholar Anita Allen explains, “If privacy simply meant ‘being let alone,’ any form of offensive or harmful conduct directed toward another person could be characterized as a violation of personal privacy. A punch in the nose would be a privacy invasion as much as a peep in the bedroom.” (2008, p. 18).

Later in the book, when Solove presents his taxonomy, which includes protection against “intrusion,” he says:

“Intrusion” involves invasions or incursions into one’s life. It disturbs the victim’s daily activities, alters her routines, destroys her solitude, and often makes her feel uncomfortable and uneasy. Protection against intrusion involves protecting the individual from unwanted social invasions, affording people what Warren and Brandeis called “the right to be let alone.” (2008, p. 162).

The contradiction is glaring. On the one hand, the right to be let alone is too broad a conception of the right to privacy. On the other hand, Solove recognizes that his taxonomy implies a right to be let alone. This is just one example of a more general problem, but it is illustrative of how some of the theoreticians belonging to the tradition of Solove and others proceed as if contextual factors weren’t recognized in the debate on privacy and the right to privacy until they brought attention to them.

It is worthwhile to consider whether the model I have considered in this article can be helpful in these cases: that is, properly distinguishing between the definition and the application of the definition. By doing so, we can recognize that some conceptual work needs to be done in context while safeguarding the possibility of a stable and well-defined concept.

While Solove seems to think that the problem with defining privacy (or the right thereof) means that we must turn to a definition-free conception, another alternative is to use the model that I have proposed. Thus, although I am not promoting Warren and Brandeis’ conceptual idea—on the contrary, I think their analysis is flawed—we can ask whether the model I propose can help if we think that their analysis has captured something of the essence of the relevant concepts. That is, under the presumption that solitude captures the essence of privacy and that the right to be let alone captures the essence of the right to privacy, we could proceed to say that we must analyze what this means in context.

Arguably, we should require more from a definition than what Warren and Brandeis have offered, but it may not be the case—as Solove presumes—that a definition of the right to privacy needs to settle all issues that are normally considered conceptual issues, so that no further conceptual work is needed. Indeed, it may be that we need to apply a definition in context to see whether, for example, punching someone is the type of action that would violate the right under the definition.

Because I am critical of their definition, I will not attempt to show how such a process would proceed (i.e., unlike the Appropriate Access definition, which is created to have this dual function, I do not think that Warren and Brandeis’ analysis is sufficiently well-suited for this purpose). My point here is merely to note that there are alternative methodological usages of definitions available than what Solove has considered. In particular, as my analysis of the Appropriate Access definition illustrates, we can achieve conceptual stability and contextual sensitivity by leaving part of the conceptual work to the application of the definition. That is, the failure of definitions in isolation does not rule out the possibility of new definitions or the combination of definition and application.

Possibly, many definitions of (the right to) privacy can avoid certain types of counterexamples and challenges by applying this model. However, it is also central to ask what the limit of the model is. Consider the following debate: As I argued in Lundgren (2020), analyses or definitions of privacy based on control over some private matters (e.g., private information) suffer from a dilemma of counterexamples such that if a definition avoids one of them, it suffers from the other (technically, the individual counterexamples are avoided by changing the conception of control). Moreover, the only way to avoid this dilemma is a path that collapses the control-based definition into a limited-access-based definition. More recently, Menges (2021) suggested a new way of understanding the notion of control that avoids collapsing into a limited-access definition (but see also Mainz, 2021, who raises the question of whether this collapse is indeed avoided).

One question we can raise is whether we can accept that the notion of control remains unsettled in the definition and whether we can avoid counterexamples by settling it only in the application. Plausibly, for any counterexample, there is some notion of control that avoids it (indeed, this is the point I raise in Lundgren, 2020, while showing how such modifications suffer from another counterexample).

However, this would imply an extreme relativism, which I do not think is acceptable. The difference between this approach and the model I suggest is that the former alters the definition, since the notion of control would change from one situation to another, while in the case of the Appropriate Access definition, the concept of appropriate access remains constant, even if what is appropriate access differs depending on the given stakeholder and the context.

Hence, it should be recognized that some counterexamples and other conceptual challenges cannot be overcome by pushing some questions from the definition to its application, but this is as it should be since if all problems could be resolved, there would be a plethora of competing stable definitions of what should be the same concept.

Finally, it may be illustrative to go back to the comparison between the Appropriate Access definition and the CIA definition. While the former is contextually sensitive and allows for application in any given context, the latter is rigid. In the third section, “The Appropriate Access definition,” I mentioned how availability is defined in the ISO 27000 standard. However, it is worthwhile to briefly consider one of the other properties, that of confidentiality, which is defined as the “property that information is not made available or disclosed to unauthorized individuals, entities, or processes” (ISO 27000 standard, 2016, p. 3).

As the example makes clear, there is little room for contextual application in the CIA definition. Consider how confidentiality (as well as availability) depends on the notion of who is authorized or unauthorized. The problem is that principles for authorization are never perfect. As discussed in Lundgren and Möller (2019), a correct authorization process does not entail that the authorized individual should be authorized, nor does the property of being (un)authorized entail that they should (or should not) have access to some specific information. While information security is sometimes threatened by unauthorized individuals, it can also be the case that an authorized individual’s access is improper.

While the rigidity and clarity of these processes in the CIA definition might be helpful for various purposes, such as adherence to legal requirements, it is a definition that will be sensitive to counterexamples, and it will sometimes conflict with sensible choices in practical situations. One may think that these problems can be solved by defining the CIA concepts at another level of abstraction. However, the solution to stakeholder-relativism in the Appropriate Access definition does not depend on making a rigid notion contextually sensitive; it depends on recognizing that a contextually sensitive notion is the right notion for the purpose. Trying to make a rigid notion contextually sensitive would run into similar problems as those discussed earlier relative to defining or analysing privacy in terms of control.

Final comments

In this paper, I have argued that we can create, design, or define concepts that remain stable against counterexamples and other conceptual challenges. My argument was based on a case study of the Appropriate Access definition—a definition of information security (a concept that, because of its technological context, should be especially sensitive to technological change). As I argued, the Appropriate Access definition can avoid the challenges of (possible) technologies because it places itself on an abstraction level that is independent of contextual factors; instead, contextual factors must be considered when applying the definition.

Moreover, I argued that despite the arguably normative nature of security, normative critiques (for either ethical, social, or epistemic reasons) will not be relevant, because the definition is stakeholder neutral; and even if bias is motivated (e.g., ethically motivated because we want security for good, not for bad, purposes), we still need a stakeholder-neutral definition to understand when an object is secure.

Lastly, I suggested that the case study has the potential to be generalized and that the model used in Lundgren and Möller (2019) can be applied in other contexts. For example, in the debate on privacy and the right to privacy, there is disagreement about whether we can (or should aim to) arrive at a stable definition. Arguably, by distinguishing between the conceptual work done by defining a concept and the conceptual work done by applying the definition in context, we can potentially have a methodological roadmap to resolve some of the conceptual quandaries in other debates as well. Of course, more work needs to be done on this.