Responsibility Modelling and Its Application to Trust Management

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9750)


A narrative of trust can be constructed via an explicative dialogue that views trust as both a technical and a social construct. From a technical viewpoint trust can be measured in terms of reliability and dependability, while from a social standpoint trust can be viewed as (a) the need to trust, (b) trust based on identification, (c) trust based on competence and (d) trust based on evidence. In this paper we develop a socio-technical model of trust that utilises the concepts of responsibilities and roles so as to link the technical and social aspects of trust into a single inductive logical framework. A role can be viewed from a structural and a functional perspective, allowing us to express concepts of behaviour within a socio-technical system and to reason about them logically. We further develop a logical, graphical model of responsibility using both causal and consequential modal operators. From this model we explore the relationship between the elements of a task's execution and the actions and communication associated with a task, and hence with a responsibility. We use this logical model to show how an argument of consistency can be constructed and, from that, how a measure of trust within a socio-technical construct can be derived.



1 Introduction

Socio-Technical systems (STS) in organisational development is an approach to complex organisational work design that recognises the interaction between people and technology in workplaces [3, 5]. By socio-technical systems we mean people (individuals, groups, roles and organisations), physical equipment (buildings, surroundings, etc.), hardware and software, laws and regulations that accompany the organisations (e.g. laws for the protection of privacy), data (what data are kept, in which formats, who has access to them, where they are kept) and procedures (official and unofficial processes, data flows, relationships; in general, anything that describes how things work, or rather should work, in an organisation) [4].

The target is to construct a framework that will allow us to reason about the level of trust as a stateful model at the socio-technical systems level, so as to better capture the dynamics of a cybernetic organisation and its state of affairs. It is in the cybernetic organisation's nature that we can find the arguments for the need for a more social approach, as such organisations are as much biological as they are artificial. Within the context of responsibility modelling and socio-technical systems we can define trust as follows:

Trust is a particular assessment that an agent, or group of agents, will fulfil a responsibility.

The main target of socio-technical systems (STS) is to blend both the technical and the social systems in an organisation [9, 10]. This is considered necessary because both aspects are of equal importance; yet when treated independently, optimal arrangements for both might not be achievable, and trade-offs are required [3]. We will use stateful models to express the status quo of an organisation, i.e. the current state of the systems, personnel and processes at each discrete moment before and after an event has occurred. This will give us a better perspective on the dependencies, responsibilities and, finally, reliabilities that run through the entire hierarchical chain of an organisation. It will thus allow us to run different threat scenarios and detect potential vulnerabilities in a corporate network through forward and backward chaining, and thereby to identify when trust is compromised.

2 Background

Socio-technical systems focus on groups as working units of interaction that are capable of either linear cause-effect relationships, or non-linear ones that are more complex and unpredictable [3]. They are adaptable to the constantly changing environment and the complexity that lies at the heart of most organisations. The concept of tasks, their owners, their meaningfulness and the entire responsibility modelling, as well as the dependencies, are also a large part of this theory. In this study we treat people and systems as actors of certain tasks over a state of affairs. They are agents that comply with the same rules and norms in the way they operate and interact with other agents for the accomplishment of a state of affairs. By agents we mean individuals, groups of people or systems that hold roles, and thus responsibilities, for the execution or maintenance of certain tasks with certain objectives.

Along with the socio-technical systems approach we will apply Role Theory to the agents, as each agent in an organisation fulfils some roles in association with certain states of affairs [11]. Role Theory emphasises the fact that roles are basically sets of rights and responsibilities, expectations, behaviours or expected behaviours, and norms. People's behaviour in organisations is bounded by a specific context, subject to both social and legal compliance, depending on their position in the hierarchy. The objective is to support Responsibility Modelling of socio-technical systems [2], to analyse their internal structure, the responsibility flows and the dependabilities. This will provide us with the necessary information and structure upon which we can apply scenarios that simulate behaviours deviating from the expected (e.g. attack scenarios), along with logical rules that best describe the organisation at hand, its expected behaviour and targets. That will allow us to locate vulnerabilities in the supply chain and to express cause and effect in case anything changes in the environment beyond expectation.

Different types of threats and countermeasures, different exposures, the variety of information and the heterogeneous data make it hard to manage risk. Thus it is clear that, given the level of complexity of Information Systems Security (ISS) risk management, the simple linear models proposed in most existing approaches will not be able to capture such complexities [4]. For this reason we suggest the socio-technical systems approach combined with Role Theory and, eventually, Responsibility and Interaction Modelling, as we think it works much better than linear models and is far more capable of mapping the complex relationships that more realistically represent organisations of any size. It is the nature of the information these models provide that makes them appropriate for impact assessment and vulnerability analysis.

3 Expressing and Modelling Responsibilities

Responsibility modelling is the analysis and design technique for the responsibilities within an organisation, with the purpose of exploring the internal structure and the dependabilities in the socio-technical system [2]. It is one way of exploring the relationships amongst personnel, technical infrastructure, resources and business processes. What is interesting is that the risk associated with any deviation from the expected behaviour can be explored. In the event of an unanticipated change, a before-and-after analysis can determine what effect the event had on the socio-technical system. In this situation, applying vulnerability analysis will help illustrate the system's strengths and weaknesses and reveal the associated risks.

According to dictionary definitions, responsibility has two meanings:
  • The state of having a duty to deal with a certain state of affairs.

  • The state of being accountable or to blame for a certain state of affairs.

The first case has a causal connotation, meaning the agent has the responsibility for doing something, i.e. making an event happen. The second case has a connotation of blame between the actual action and its results, but does not necessarily imply causality for the agent held accountable. For example, parents are held responsible for the actions of their children. As a result, two types of responsibility can be distinguished: causal responsibility and consequential responsibility [2]. For instance, each member of the crew of a ship or a plane is causally responsible for the performance of certain tasks, but the captain or the pilot is always consequentially responsible for the state of the ship or plane. Within the context of a responsibility we can explore the following:
Fig. 1. A responsibility model

  • Role Assignment: All roles must be executed by an agent, and that agent must have had the role assigned to it.

  • Role Authorisation: An agent must be authorised by another agent to take on a specific role.

  • Rights and Mode Authorisation: An agent can only exercise a right/mode in the context of a role that they are authorised to execute.
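The three constraints above can be read as executable checks. The following Python fragment is a minimal sketch of that reading; the `Agent` class, the `authorisations` set and all names are our own illustrative assumptions, not constructs from the paper.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.roles = set()        # roles currently assigned to this agent

# Role Authorisation: (authoriser, agent, role) triples granted so far.
authorisations = set()

def authorise(authoriser, agent, role):
    """One agent authorises another agent to take on a specific role."""
    authorisations.add((authoriser.name, agent.name, role))

def assign(agent, role):
    """Role Assignment: only permitted if some agent has authorised it."""
    if not any(a == agent.name and r == role for (_, a, r) in authorisations):
        raise PermissionError(f"{agent.name} is not authorised for role {role}")
    agent.roles.add(role)

def may_use(agent, right, role):
    """Rights/Mode Authorisation: a right is exercised only via an assigned role."""
    return role in agent.roles
```

Here `assign` refuses any role for which no authorisation triple has been recorded, mirroring the rule that role assignment presupposes role authorisation.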

Responsibility is associated with agents, resources and tasks as defined in the Ontology for a Socio-Technical Systems Model (see Fig. 2). It is defined as the duty from one agent (the responsible) to another (the authority or principal) for the accomplishment of a state of affairs, whether this is the execution, maintenance or avoidance of certain tasks, subject to conformance with the organisational culture (Fig. 2). Thus the characteristics of a responsibility consist of: who is responsible to whom, for what state of affairs, what the obligations of the responsibility holder are in order to fulfil the responsibility, and what type of responsibility it is [2]. There are two types of agent that can exist within a socio-technical system: (a) a human and (b) a computer system.

  • Causal responsibility lies effectively between one agent and a state of affairs, while consequential responsibility is a three-way relationship between two agents and a state of affairs. In this case the agent who holds the responsibility can be held accountable, culpable or liable to the authority agent, as seen in Fig. 2. The most important part of the diagram for consequential responsibility is the relationship between the two agents, as the key question to be answered is "who is responsible to whom, and in what respect?". On the other hand, for causal responsibility the most important part is the relationship between the agent and the task, as the key question to be answered is "who is responsible for this action?". Causal responsibility is a dynamic behavioural relationship between an agent and a state of affairs.

  • Consequential responsibility indicates the organisational relationships within organisations and their objectives. Due to its nature, consequential responsibility may be held by more than one agent, and it could rest upon an entire organisation, whereas causal responsibility usually lies upon one agent. Consequential responsibility can also be delegated from one agent to another; causal responsibility is normally not capable of that, although it can be transferred. Types of consequential responsibility include:
    • Accountability. This is where a responsibility holder is required by the responsibility principal to give an account of the actions through which the agent has failed to discharge a causal responsibility. This account can take various forms, such as a verbal or a written statement. Both a human and a computer agent can be held accountable. For a computer system to fulfil such a responsibility it is required to keep an account of its actions.

    • Liability. This is where a responsibility holder is required by the responsibility principal to be liable for some form of recompense with regard to the failure to discharge a causal responsibility. The liability typically takes the form of some resource, such as money, that is given to the agent upon whom the failure of the responsibility holder has impacted. Both a human and a computer agent can be held liable. However, for a computer system to fulfil a liability it is required to have an understanding of the resources that are at stake and of the implications of the loss of these resources, so that informed actions can be taken. A key distinction between a human and a computer is that a computer can only be held liable for informed actions, whereas a human can be held liable for both informed and uninformed actions.

    • Culpability. This is where a responsibility holder is held culpable by the responsibility principal for the actions through which the agent has failed to discharge a causal responsibility. Culpability typically takes the form of blameworthiness and results in the withdrawal of some right from the responsibility holder; typical forms of culpability have resulted in the imprisonment of the responsibility holder. A computer system cannot be held culpable and punished, as a computer system has no social, moral or ethical framework within which to understand culpability.

The key distinctions between a causal responsibility and a consequential responsibility are:
  • Within causal responsibility only a single agent can hold a responsibility, whereas within consequential responsibility multiple agents can hold a responsibility.

  • Within causal responsibility the responsibility target is a state of affairs, representing an up-set of the state of affairs, whereas with consequential responsibility the responsibility target is the definition of a role relationship between two agents.
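The arity distinction above (one agent and a state of affairs versus holders, an authority and a target) can be captured as two record types. This is a minimal sketch; the class and field names are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalResponsibility:
    """Binary relation: a single agent is responsible for a state of affairs."""
    agent: str
    state_of_affairs: str

@dataclass(frozen=True)
class ConsequentialResponsibility:
    """Ternary relation: one or more holders are answerable to an authority
    for a role relationship; kind is accountability, liability or culpability."""
    holders: frozenset            # may contain more than one agent
    authority: str
    target: str
    kind: str
```

Making `holders` a set records the fact that consequential responsibility, unlike causal responsibility, may rest on several agents at once.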

4 Semantics of an STS Model

In order to reason about a socio-technical system (STS) model we need to be able to express the semantics of the model such that a variety of modal action logic operators can be utilised. To express the semantics of a socio-technical model we make use of a Kripke Frame [13, 14] \(\big < W,R\big>\), where W is a non-empty set defining a set of possible worlds or states of affairs, and R is a binary relation on W such that \(R \subseteq W \times W\). The relation R is formed from the interactions that a role holder performs in moving an organisation from one possible world \(w_1\) to the next \(w_2\). Elements of W are called worlds and R is known as the accessibility relation.

A Kripke Frame utilises a possible-world semantic model to express reasoning constructs. For a socio-technical system we can say that elements of W can include:
  • Responsibility Types and Responsibilities.

  • Agents, Roles and Resources.

  • Access Rights.

A Kripke Model/Structure is a triple \({\big < W,R,\Vdash \!\!\big >}\) where \(\big < W,R\big>\) is a Kripke Frame and \(\Vdash \) is a relation between nodes and formulas, and is called the labelling function, such that:
$$\begin{aligned} w&\Vdash \lnot A \text { if and only if } w \not \Vdash A\\ w&\Vdash A \rightarrow B \text { if and only if } w \not \Vdash A \text { or } w \Vdash B\\ w&\Vdash \square A \text { if and only if } u \Vdash A \text { for all } u \text { such that } w R u \end{aligned}$$
From a definitional perspective we can read \(w \Vdash A\) as meaning “w satisfies A”, “A is satisfied in w”, or “w forces A”. The relation is called the satisfaction relation, evaluation, or forcing relation. The satisfaction relation is uniquely determined by its value on propositional variables. We can view the satisfaction relation as a binary trust operator. A formula A is valid for:
$$\begin{aligned} \text {a}&\text { model } \big< W,R,\Vdash \!\big>, \text { if } w \Vdash A \text { for all } w \in W \\ \text {a}&\text { frame } \big< W,R\big>, \text { if it is valid in } \big< W,R,\Vdash \!\big> \text { for all possible } \Vdash \\ \text {a}&\text { class } C \text { of frames, if it is valid in every member of } C \end{aligned}$$
We can construct a model of an STS as a possible world denoted \(W_{STS}\), using atoms such as responsibilities, roles, etc., and defining a set of relations \(R_{STS}\) over the elements in the possible world \(W_{STS}\), such that a Kripke Frame is defined as follows: \(M_{STS} =\big < W_{STS},R_{STS}\big>\). Via the application of Kripke Frame semantics we will be able to construct a set of formulas through which we can reason about security requirements and explore the security impact of threat scenarios against an STS; this allows us to express trust requirements and patterns of behaviour that conform to trust.
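As an illustration of how such a model could be explored mechanically, the following sketch evaluates the satisfaction relation defined above over a small frame. The tuple encoding of formulas and all names are our own assumptions.

```python
def sat(w, formula, R, valuation):
    """w ||- formula, with formulas encoded as nested tuples:
    ("atom", p), ("not", A), ("implies", A, B), ("box", A)."""
    op = formula[0]
    if op == "atom":
        return formula[1] in valuation.get(w, set())
    if op == "not":
        return not sat(w, formula[1], R, valuation)
    if op == "implies":
        # w ||- A -> B  iff  w |/- A or w ||- B
        return (not sat(w, formula[1], R, valuation)) or sat(w, formula[2], R, valuation)
    if op == "box":
        # w ||- []A  iff  u ||- A for all u such that w R u
        return all(sat(u, formula[1], R, valuation) for (v, u) in R if v == w)
    raise ValueError(f"unknown operator {op}")

def valid_in_model(W, R, valuation, formula):
    """A formula is valid in a model if it is satisfied at every world in W."""
    return all(sat(w, formula, R, valuation) for w in W)
```

With two worlds linked by a single accessibility step, one can check, for instance, that \(\square p\) holds at the initial world exactly when p holds at its successor.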

5 Role Definition

At its simplest level the concept of a role is used to define behaviour in terms of a set of rights, duties, expectations, norms and behaviours that a person has to face and fulfil [1]. In expanding this concept within a model of a socio-technical system we may say that [1]:
  • A role defines the division of labour and takes the form of a series of interactions between two roles. A role always stands in a relationship with another role and it is through this relationship that interactions flow.

  • A role is both a descriptive and a normative concept that can be used to represent many different organisational realities from the structured to the unstructured.

  • A role is to be treated as the basic building block that makes it possible to move between organisational requirements and the requirements of individual agents (e.g. from the organisation's role in a project to the way these responsibilities devolve to the roles of members of the project team).

  • A role defines norms of behaviour and thereby system requirements. Roles include “appropriate” and “permitted” forms of behaviour, guided by organisational norms, which are commonly known and hence determine expectations and requirements.

  • A role defines the relationships between role holders and the behaviour they expect of one another, which in turn defines many environmental requirements.

When modelling a socio-technical system we distinguish between different types of role relationships, such as: master–slave, supervisor–subordinate, peer–peer and supplier–customer.

A role is only meaningful when it stands in relationship to another role. A role functions to define the tasks/interactions that an agent must execute, in collaboration with other agents, in order to fulfil a responsibility. A role must be performed by an agent, and an agent can only fulfil a consequential responsibility through the execution of one or more roles. Hence our concept of role allows us to distinguish the following:
  • Agencies and agents with associated responsibilities to other agencies and agents.

  • Tasks that interact through the utilisation of resources and are structured into actions and operations.

A formal type model for role relationships enables us to represent and analyse the relations between functional and structural concepts and to express the way in which they operate in real organisations. An agent fulfils a responsibility via the performance of a set of roles in a logical sequence or pattern.

6 The RAR Model

A key requirement for the development of an impact assessment system is the ability to express taxonomic and ontological structures that represent an enterprise view of a cybernetic socio-technical model of an organisation. Such structures are depicted in Fig. 2. The socio-technical cybernetic enterprise model shown in Fig. 1 presents a basic taxonomy and ontology of roles, acts and resources. Roles are viewed as the primary manipulators of the state or structure of the system, and an act is the only object that can create, modify or destroy other objects. Acts are the operations that change the state of the system, and they are performed by roles.
Fig. 2. A responsibility model

All actions must induce state changes in the system that are visible. Resources can be of two types, physical or logical: physical resources are tangible objects such as servers, while logical resources include information, time, etc. When modelling organisations at the enterprise level, resources act either as tokens of responsibility, signifying that an agent has a binding responsibility upon it, or as objects for which some agency is responsible. An important type of logical resource is data. When data are passed from one action to another, interactions occur, the data being the bearer of those interactions.

The basic component of our model takes the form of an entity-relation schema defining three sorts of entities: roles, acts, and resources. It defines a set of relations between these entity types. The basic entities are of three types:
  • Roles:
    • Roles are used to define the nature/type of behaviour used to fulfil a responsibility.

  • Actions:
    • An act is to be distinguished from the doer of the action. Thus an act is a functional answer to a what question, and takes a verbal form.

  • Resources:
    • Resources are answers to with-what or by-means-of-what questions. When modelling business processes a resource is known as a data object [8].

Between these three types of basic entities there are six kinds of relations:
  • Act-Act
    • Acts interact with each other. Such interactions are usually mediated by the exchange of resources, though direct interactions, such as interrupts, can also occur.

  • Act-Resource
    • The relation between an activity and a resource is an access mode, such as reads or writes (for information resources) or provides or consumes (for commodity resources).

  • Resource-Resource
    • The relation between resources is what, in information technology terms, is called the conceptual schema.

  • Role-Resource
    • The relation between a role and a resource is an access right, such as the right: to-create, to-destroy, to-allocate, to-take-ownership-of.

  • Role-Act
    • The set of acts associated with a role constitutes the behaviour of that role. By the role-act relationship we mean two related things: a capability exists to perform the act, and this capability, by virtue of some legal instrument, can be enforced by recourse to something outside the system (e.g. the judiciary). For example, we can make some elementary distinctions between role-act relationships as follows:
      • The Observer of an act knows that it is taking place.

      • The Owner of an act has the ability to destroy it (the owner of an act may differ from the creator of an act, since ownership can be transferred).

      • The Customer of an act has the ability to change its specification.

      • The Performer of the act is the role responsible for executing the act and performing the interactions.

  • Role-Role
    • The set of roles with which a role has some relation constitutes the structural role of that agent, and relates to the responsibilities and obligations that bind agents together in webs that form organisational structures.
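The three entity sorts and six relation kinds above can be tabulated as a small lookup. The relation names in this sketch paraphrase the descriptions in the list and are illustrative assumptions.

```python
ENTITY_SORTS = {"role", "act", "resource"}

# The six relation kinds of the RAR schema, keyed by the pair of entity sorts.
RELATION_KINDS = {
    ("act", "act"): "interaction",          # usually mediated by resources
    ("act", "resource"): "access-mode",     # reads/writes, provides/consumes
    ("resource", "resource"): "conceptual-schema",
    ("role", "resource"): "access-right",   # to-create, to-destroy, ...
    ("role", "act"): "behaviour",           # observer/owner/customer/performer
    ("role", "role"): "structural-role",    # responsibilities and obligations
}

def relation_kind(sort_a, sort_b):
    """Return the kind of relation permitted between two entity sorts,
    treating the pair as unordered."""
    key = (sort_a, sort_b) if (sort_a, sort_b) in RELATION_KINDS else (sort_b, sort_a)
    return RELATION_KINDS.get(key)
```

Such a table makes it easy to validate that every edge in a concrete RAR model connects entity sorts in one of the six permitted ways.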

An agent fulfils a responsibility via the performance of a set of roles in a logical sequence or pattern [12]. We can define a set of Boolean functions that define the order within which a set of roles are executed. For example, within a hospital, a doctor agent cannot deliver a treatment to the patient until the doctor has diagnosed the patient’s illness.
Fig. 3. Responsibility and roles

The role operator \(\Omega _{[a]} \big ( r_1, r_2, \ldots , r_n\big ) \rightarrow \{\top , \bot \}\) says that the agent a performs the ordered set of roles and/or role operators \(r_1, r_2, \ldots , r_{n-1}, r_n\) in a linear sequence. The role \(r_{n-1}\) must be complete before the role \(r_{n}\) can be performed, and the role \(r_{1}\) is the initial role in the sequence. The role \(r_{n}\) may be said to be functionally dependent on the role \(r_{n-1}\).

The role operator \(\Delta _{[a]} \big ( r_1, r_2, \ldots , r_n\big ) \rightarrow \{\top , \bot \}\) says that the agent a performs the ordered set of roles and/or role operators \(r_1, r_2, \ldots , r_{n-1}, r_n\) in parallel. The role \(r_{n-1}\) is performed at the same time as the role \(r_{n}\), and there is no functional dependency between the roles \(r_{n-1}\) and \(r_{n}\). From a temporal perspective, roles within this function can start and terminate at different points in time, but for the function to be true all roles must be successfully performed.
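One possible reading of the \(\Omega\) and \(\Delta\) operators as Boolean functions is sketched below, assuming each role execution is recorded with a success flag and start/finish times; this encoding is ours, not the paper's.

```python
def omega(roles):
    """Sequential operator: True iff every role was successfully performed
    and each role finished before the next one started."""
    performed = all(r["performed"] for r in roles)
    ordered = all(roles[i]["finish"] <= roles[i + 1]["start"]
                  for i in range(len(roles) - 1))
    return performed and ordered

def delta(roles):
    """Parallel operator: True iff every role was successfully performed;
    start and finish times may overlap freely."""
    return all(r["performed"] for r in roles)
```

Under this reading, a role trace that satisfies `omega` always satisfies `delta`, but not vice versa, which matches the absence of functional dependency in the parallel case.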

An agent a fulfils a responsibility e by performing a set of role operators \(\{o_1, o_2, \ldots , o_n\}\) in a structured order, denoted by the operator \(\nabla _{[e]}\). Within Kripke semantics we can express when a responsibility e is fulfilled as follows: for a Kripke Frame M and a state of affairs w, \(M, w \Vdash \nabla _{[e]} \rightarrow \{o_1, o_2, \ldots , o_n\}\). We can identify the agent responsible for performing a role operator via the function \(\psi \), such that
$$\begin{aligned} \psi (\Delta _{[a]} \big ( . . .\big )) \rightarrow a \text{ and } \psi (\Omega _{[a]} \big ( . . .\big )) \rightarrow a \end{aligned}$$
We can identify the roles and role-operators that an agent performs via the function \(\psi ^{-1}\), such that
$$\begin{aligned} {\begin{matrix} \psi ^{-1}(\Delta _{[a]} \big ( o_1, o_2, . . ., o_n\big )) \rightarrow \{o_1, o_2, . . ., o_n\} \\ \psi ^{-1}(\Omega _{[a]} \big ( o_1, o_2, . . ., o_n\big )) \rightarrow \{o_1, o_2, . . ., o_n\} \end{matrix}} \end{aligned}$$
Within Fig. 3 we can say that the responsibility e is fulfilled via the pattern that conforms to the sequential execution, by agent a, of the roles \(r_1\), \(r_2\) and \(r_5\), and the parallel execution, by the agent a, of the roles \(r_3\) and \(r_4\). By combining our role functions we can assert the following with reference to Fig. 3:
$$\begin{aligned} \nabla _{[e]} \rightarrow \{\Omega _{[a]} \{ r_1, r_2, \Delta _{[a]}\{ r_3, r_4\},r_5\} \} \end{aligned}$$
When processing this structure we will require a function \(\Sigma \) that extracts all of the role operators and returns them as a partially ordered set. For example, using the pattern illustrated in Fig. 3, we can define the \(\Sigma \) function as follows:
$$\begin{aligned} \Sigma (\nabla _{[e]}) \rightarrow&\{ \\&\Omega _{[a]} \{ r_1, r_2, \Delta _{[a]}\{ r_3, r_4\},r_5\}, \\&\Delta _{[a]}\{ r_3, r_4\} \} \end{aligned}$$
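One way to realise \(\psi\), \(\psi^{-1}\) and \(\Sigma\) is over a nested dictionary encoding of the operators, as sketched below; the encoding itself is our own assumption.

```python
def psi(operator):
    """psi: return the agent responsible for performing a role operator."""
    return operator["agent"]

def psi_inv(operator):
    """psi-inverse: return the roles/role-operators the agent performs."""
    return operator["args"]

def sigma(resp):
    """Sigma: extract every role operator nested under a responsibility
    operator, in traversal order (a linearisation of the partial order)."""
    found = []
    def walk(node):
        if isinstance(node, dict):    # an Omega/Delta operator node
            found.append(node)
            for arg in node["args"]:
                walk(arg)
        # plain strings are role names and are not collected
    for arg in resp["args"]:
        walk(arg)
    return found
```

For the pattern of Fig. 3, encoding the inner \(\Delta_{[a]}\{r_3, r_4\}\) inside the outer \(\Omega_{[a]}\{r_1, r_2, \Delta_{[a]}\{r_3, r_4\}, r_5\}\) makes `sigma` return exactly those two operators.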
A role is performed within the context of a responsibility and by an agent. We can define an agent’s permission to perform a role as consisting of the following as defined in Fig. 2:
  • The Authority to perform the role.

  • The Capability to perform the role.

  • The Access Rights to perform the role.

7 Functional Schema

We can decompose the behaviour of a role into a set of linguistic constructs and interactions, called speech acts, that the role holder executes. This behavioural specification forms the definition of the function that the agent is required to execute to fulfil a responsibility; when modelling business processes, a role relationship is known as a swim lane [8]. Within the model of a socio-technical system we can model behaviour as a set of interactions, where each interaction is a directed graph of a set of speech acts. The execution of these sets of interactions allows us to validate the hypothesis that a set of roles meets a trust requirement.

A speech act is a basic unit of communication being a spoken or written utterance that results in meaning being assigned to a linguistic expression [6, 7]. Speech acts always involve at least two agent roles: speaker and hearer (though these can on occasion be the same individual). Speech acts form larger wholes called conversations, which exhibit systematic regularities that can be studied and analysed.

An example of a conversation is to authenticate someone; it consists of a number of speech acts such as requesting a token of identification, confirming its authenticity with an authority, and finally accepting the individual as genuine (or not). Speech acts are modelled as taking place over the obligation interface link between structural roles. There are several types of speech acts, of which the four main categories are propositional acts, illocutionary acts, perlocutionary acts and instrumental acts.

  • A propositional act is a statement which can be evaluated to be true or false, such as “It is Christmas Day today” or “There’ll always be an England”. Propositional acts can be evaluated to be true or false (though it may not be easy to determine which, as in the second of our examples). We do not say anything more about how the evaluation is performed, nor about the theory of truth we are assuming. Propositional acts can meaningfully be uttered by people or machines.

  • An illocutionary act, or illocution, is always performed when a person utters certain expressions with an intention, for example, "I promise to write the letter" or "I refuse to pay the bill". When the intention has been recognised by the hearer(s), the illocutionary act has been successful, and we say its meaning has been understood. Questions of the truth of an illocution do not arise; rather, the act creates a commitment that in a moral sense binds the future behaviour of the parties and pledges them to certain activities or expectations. Illocutionary acts can be expressed through mechanical means as well as vocal means. They are, however, always an expression of human concern or intention.

  • A perlocutionary act, or perlocution, is an act that produces effects on the feelings, attitudes or behaviour of the hearer(s), for example to get someone to write a letter on request. Again, truth of a perlocutionary act is not an issue; success is, and occurs when the perlocutionary act has its desired effect. When modelling business processes a perlocutionary act can be viewed as an event [8].

  • Instrumental acts not only facilitate the examination and formalisation of when, where and by whom a task is performed; they also aid the elucidation and identification of what resources are required and manipulated by the agent performing the act. An instrumental act can be decomposed into smaller instrumental acts. An example of an instrumental act is when an electrician agent fixes an appliance for a homeowner agent. In performing this act the agent is not only discharging his responsibility but also consuming a resource. When modelling business processes an instrumental act can be viewed as an activity [8].
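The authentication conversation described earlier can be sketched as a small directed graph of speech acts, with each complete path through the graph being one possible run of the conversation. The act names and edges below are illustrative assumptions.

```python
# Directed graph of speech acts: node -> list of possible next acts.
conversation = {
    "request-identification": ["confirm-with-authority"],
    "confirm-with-authority": ["accept-as-genuine", "reject"],
    "accept-as-genuine": [],
    "reject": [],
}

def possible_runs(graph, start):
    """Enumerate all complete paths (runs of the conversation) from start
    to any speech act with no successors."""
    successors = graph[start]
    if not successors:
        return [[start]]
    return [[start] + rest
            for nxt in successors
            for rest in possible_runs(graph, nxt)]
```

Enumerating the runs of such a graph is one way to check that every conversation a role can engage in terminates either in acceptance or in rejection.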

8 A State of Affairs

A state of affairs within the context of Kripke semantics is defined as part of a Kripke Frame \(\big < W_{STS},R_{STS} \big>\), such that a possible world w is defined as \( w \in W_{STS}\). The nodes and relations that can exist within a possible world and a state of affairs are as follows:
  • A Responsibility Type defines the nature of the responsibility in terms of consequential or causal constructs. Let \(R_t\) be the set of responsibility types that exist in a socio-technical system.

  • A Role is a norm/pattern of behaviour within the context of a responsibility. Let \(R_o\) be the set of roles in a socio-technical system.

  • A Resource relates to the source or supply from which we can derive benefit, and hence we can say that the resource is consumed in the context of a task. Let \(R_c\) be the set of resources in a socio-technical system.

  • An Access Right is the ability to perform basic actions on a resource, such as: read, write, create, amend, take-ownership-of, give-ownership-to and destroy.

  • A responsibility is the modal operator \(\square \) expressing the relationship from one agent (the responsible) to another (the authority or principal) for the accomplishment of a state of affairs. Let \(A_{resp}\) be the set of responsibilities in a socio-technical system.

  • A responsibility operator \(\nabla \) defines the sequence/patterns of role operators required to be performed by an agent to fulfil a responsibility.

  • The role operators \(\Omega _{[a]}\) and \(\Delta _{[a]}\) define the sequencing order for the performance of a set of roles.

Now that we have defined the state of affairs \(S_t\) as part of a Kripke Frame \(\big <W_{STS}, R_{STS}\big>\), and hence a possible world \( w \in W_{STS}\) such that \(S_t \subseteq w\), we can define a security requirement via the \(\Vdash \) operator. The goal in asserting a set of requirements is to identify where a contradiction or omission is present in the model. For example, an agent cannot be responsible for \(\alpha \) and \(\lnot \alpha \) at the same time within a Kripke semantic model; such a thing would be viewed as a contradiction.
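The contradiction check described above can be sketched as follows, encoding each responsibility assertion as an (agent, proposition, polarity) triple; this encoding is our own assumption.

```python
def contradictions(responsibilities):
    """Return the (agent, proposition) pairs for which the agent is asserted
    to be responsible for both p and not-p within one state of affairs."""
    held = set(responsibilities)   # elements: (agent, proposition, polarity)
    return {(agent, prop)
            for (agent, prop, polarity) in held
            if (agent, prop, not polarity) in held}
```

An empty result means the asserted requirements are consistent in this respect; a non-empty result pinpoints exactly which agent and proposition violate the rule.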

9 Summary and Conclusions

In this paper we have sought to demonstrate how the application of a formal approach to the modelling of a socio-technical system can be used to model trust. We have achieved this by developing a set of logical operators relating to what a responsibility is and how it is fulfilled via the performance of a set of roles. In particular, the application of hard and soft contradictions within a Kripke Model can be seen as a powerful tool allowing us to express and validate trust operators.


References

  1. Biddle, B.J., Thomas, E.J.: Role Theory. Wiley, New York (1966)
  2. Dewsbury, G., Dobson, J.: Responsibility and Dependable Systems. Springer, London (2007)
  3. Fox, W.M.: Socio-technical system principles and guidelines: past and present. J. Appl. Behav. Sci. 31(1), 91–105 (1995)
  4. Sun, L.: An information systems security risk assessment model under Dempster-Shafer theory of belief functions. J. Manag. Inf. Syst. 22(4), 109–142 (2006)
  5. Cooper, R., Foster, M.: Socio-technical systems. Am. Psychol. 26, 467–474 (1971)
  6. Searle, J.R.: Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge (1984)
  7. Thomson, J.J.: Acts and Other Events. Contemporary Philosophy Series. Cornell University Press, Ithaca (1977)
  8. Sherry, K.J.: BPMN Pocket Reference: A Practical Guide to the International Business Process Model and Notation Standard BPMN Version 2.0. CreateSpace Independent Publishing Platform, Seattle (2012)
  9. Baxter, G., Sommerville, I.: Socio-technical systems: from design methods to systems engineering. Interact. Comput. 23(1), 4–17 (2011)
  10. Baxter, G.: Socio-technical systems. In: LSCITS Socio-Technical Systems Engineering Handbook. University of St Andrews (2011)
  11. van Dam, K.H., Nikolic, I., Lukszo, Z. (eds.): Agent-Based Modelling of Socio-Technical Systems, 1st edn. Springer, Netherlands (2013)
  12. Charitoudi, K., Blyth, A.: A socio-technical approach to cyber risk management and impact assessment. J. Inf. Secur. 4(1), 33–41 (2013)
  13. Goldblatt, R.: A Kripke-Joyal semantics for noncommutative logic in quantales. In: Advances in Modal Logic, vol. 6 (2006)
  14. Stirling, C.: Modal and Temporal Properties of Processes. Springer, New York (2001)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Information Security Research Group (ISRG), Faculty of Computing, Engineering and Science, University of South Wales, Pontypridd, UK
