
Design for the Values of Accountability and Transparency

  • Joris Hulstijn
  • Brigitte Burgemeestre

Abstract

If an organization is to be held accountable for its actions, the public need to know what happened. Organizations must therefore “open up” and provide evidence of performance to stakeholders, such as principals who have delegated tasks, employees, suppliers or clients, or regulators overseeing compliance. The social values of transparency – the tendency to be open in communication – and accountability – providing evidence of past actions – are crucial in this respect.

More and more of the internal control systems, policies, and procedures used to gather evidence of organizational performance are implemented by information systems. Business processes are executed and supported by software applications, which therefore effectively shape the behavior of the organization. Such applications are designed, unlike practices, which generally grow. Therefore, it makes sense to take the core values of accountability and transparency explicitly into account during system development.

In this chapter we provide an account of the way in which transparency and accountability can be built into the design of business processes, internal controls, and specifically the software applications to support them. We propose to make trade-offs concerning core values explicit, using an approach called value-based argumentation. The approach is illustrated by a case study of the cooperation between providers of accounting software and the Dutch Tax and Customs Authority to develop a certificate, in order to improve the reliability of accounting software. Widespread adoption of the certificate is expected to stimulate accountability and indeed transparency in the retail sector.

Although the approach is developed and tested for designing software, the idea that trade-offs concerning core values can be made explicit by means of a critical dialogue is generic. We believe that any engineering discipline, like civil engineering, water management, or cyber security, could benefit from such a systematic approach to debating core values.

Keywords

Value sensitive design · Accountability · Transparency

Introduction

From the beginning of the 1990s, there has been an audit explosion: formal audit and evaluation mechanisms have become common practice in a wide variety of contexts (Power 1997). Not only in the financial domain, where it originated, but also in other domains like healthcare or education, professionals are now expected to be accountable to the general public by means of extensive reporting schemes. The advance of auditing is closely related to the increasing importance of the social values of accountability (providing evidence to justify past actions to others) and transparency (the tendency to be open in communication). In this chapter we would like to investigate to what extent these values affect systems design. Is it possible to design a system taking accountability and transparency into account? In brief, can we design for accountability or for transparency?

Accountability and transparency are properties of a person or organization. When used to describe a system, their meaning is derived. They are iconic notions, with positive connotations. For example, accountability is used as a kind of synonym for good governance (Dubnick 2003). Transparency of government has become a goal in its own right, witness, for example, the movement for open data (Zuiderwijk and Janssen 2014). Taken in this way, these notions are used as evaluative, not as analytical, concepts. Relative to what or whom can we evaluate these notions? Clearly their meaning depends on the context, on a relationship with others: “Accountability can be defined as a social relationship in which an actor feels an obligation to explain and to justify his or her conduct to some significant other” (Day and Klein 1987). According to Bovens (2005, 2007), an accountability relationship contains a number of components: the actor can be a person or agency. Then there is some significant other, who can be a person or agency but can also be a more abstract entity, such as God or “the general public.” Bovens calls this the forum. The relationship develops in three stages. First, the actor must feel obliged to inform the forum about its conduct, including justifications in case of failure. The obligation may be formal, i.e., required by law or by contract, or informal and self-imposed, for instance, because the actor is dependent on the forum. Second, the information may be a reason for the forum to interrogate the actor, ask for explanations, and debate the adequacy of the conduct. Third, the forum passes judgment on the actor’s conduct. A negative judgment often leads to some kind of sanction; again, this can be formal or informal. This means that an accountability relation should provide room for discussion: it is not a one-way stream of reports but rather a dialogue. The dialogue is facilitated by a transparent organization and by an inquisitive forum challenging the outcomes. Moreover, accountability is not without consequences. There is a judgment that depends on it.

We can also analyze accountability as the counterpart of responsibility (Van De Poel 2011). When I am responsible for my actions now, I may be held accountable later. I need to collect evidence, so that I can justify my decisions. One could say that this focus on evidence collecting has turned a moral topic into a rather more administrative or technical one.

In this chapter we will focus on a specific application of accountability in which evidence plays an important role, namely, regulatory compliance. Companies are accountable for their conduct to the general public, represented by a regulator (e.g., environmental inspection agency, tax administration). In corporate regulation, many laws nowadays involve some form of self-regulation or co-regulation (Ayres and Braithwaite 1992; Black 2002). Regulators increasingly rely on the efforts of the companies being regulated. Companies must determine how the regulations apply to their business, set up a system of controls to ensure compliance, monitor the effectiveness of these controls to establish evidence, and subsequently provide accountability reports. To ensure reliability, stakeholders demand certain guarantees in the way evidence is being generated. These measures are called internal controls (Coso 1992). Consider, for example, segregation of duties, reliable cash registers, automated checks, access control, or logging and monitoring.

Crucially, such internal control measures are being designed. An adequate design of the internal controls is a prerequisite for this kind of accountability. But it is not enough. Evidence is being generated in a corporate environment. People are often in the position to circumvent the controls or manipulate their outcomes (Merchant 1998). Whether they choose to do so depends on their values and beliefs, which are partly determined by the corporate culture (Hofstede et al. 1990). In particular the value of transparency is important here: do we really want to reveal these facts about our conduct? Corporate culture cannot be designed; it can only be stimulated. For instance, transparency can be facilitated by systems that make it easier rather than harder to access and share information.

When an auditor or inspector is asked to verify information disclosed by a company, he or she must rely on the controls built into the procedures, processes, and information systems. As a consequence, there is an increased need for methods to support the systematic design of control measures and secure information systems (Breu et al. 2008). Such methods are developed in the field of requirements engineering. There is a whole set of requirements engineering techniques specifically targeted to make systems secure (Dubois and Mouratidis 2010). However, these methods focus on the design of controls from a risk-based point of view: which threats and vulnerabilities are addressed by which measures? They pay little attention to the facilitation of the underlying values. In fact, security may even be harmful to transparency. Security measures often make it harder to share information.

In the area of information security, the notion of accountability has often been confused with traceability: the property of being able to trace all actions to an identifiable person in a specific role. However, traceability is neither a necessary nor sufficient condition to establish accountability (Chopra and Singh 2014). So accountability and transparency need to be considered in their own right. That suggests the following research question:

How can we make sure that the values of accountability and transparency are built into the design of information systems?

There are design methodologies that aim to incorporate nonfunctional requirements or values like security (Fabian et al. 2010), accountability (Eriksén 2002), or even transparency (Leite and Cappelli 2010) into the development process. However, these values compete with other values, such as profitability or secrecy, and with human weaknesses such as shame. How can such conflicts be resolved? Following the recommendation of Bovens (2007), we take the notion of a dialogue very seriously. The general idea is that design is a matter of trade-offs (Simon 1996). Core values, like transparency and accountability, need to be defended against other values and tendencies, such as efficiency, profitability, tradition, or secrecy. That means that design choices are justified in a process of deliberation and argumentation among a forum of stakeholders. Our working hypothesis is that such a critical dialogue can be facilitated by a technique called value-based argumentation (Atkinson and Bench-Capon 2007; Atkinson et al. 2006; Burgemeestre et al. 2011, 2013). One stakeholder proposes a design. Other stakeholders can challenge the underlying assumptions and ask for clarification. In this way, more information about the design is revealed. Ultimately, the forum passes judgment. The resulting scrutiny should provide better arguments and subsequently improve the quality and transparency of the design process itself.

In order to illustrate the dialogue approach and demonstrate its adequacy in handling design trade-offs, we describe a case study. The case study is about a long-term cooperation between a number of software providers and the Netherlands Tax Administration. These partners developed a set of standards to promote reliability of accounting software, make sure that crucial controls are embedded in the software, and thus ensure reliability of financial recording. This is crucial for tax audits, but it also affects trustworthiness of businesses in general. By initiating a standard, the government is trying to stimulate transparency and accountability in the entire business community.

Although we focus here on the design of accounting software, the use of value-based argumentation is generic. Values are crucially related to the identity of people or groups (Perelman 1980). They reveal something about who you are or where you have come from. This explains why debates about values run deep. We believe that other engineering disciplines, which must deal with challenges like safety, sustainability, or environmental hazards, could also benefit from such a systematic approach to debating core values.

The remainder of the chapter is organized as follows: the following section explains the notions of accountability and transparency in more detail. Then we develop the idea that one can in fact design for accountability, and possibly also for transparency. After that the case study is discussed. The chapter ends with conclusions and directions for further research.

Accountability and Transparency

Auditing, Agency Theory, and Internal Controls

As we stated in the introduction, we focus on accountability and transparency in the context of regulatory compliance. Companies collect evidence of behavior and produce reports. To verify these reports, auditors or inspectors are called in. Therefore, auditing theory is relevant here; see textbooks like Knechel et al. (2007). “Auditing is the systematic process of objectively obtaining and evaluating evidence regarding assertions about economic activities and events to ascertain the degree of correspondence between the assertions and established criteria, and communicate the results to interested users” (American Accounting Association 1972). Roughly, auditing is testing to a norm (Fig. 1). What is being tested is a statement or assertion made by management about some object, for instance, the accuracy and completeness of financial results, the reliability of a computer system, or the compliance of a process. The statement is tested against evidence, which must be independently collected. The testing takes place according to norms or standards. In accounting, standards like the Generally Accepted Accounting Principles (GAAP) are well established, but in other fields, like regulatory compliance, the standards often have to be developed, as the original laws are written as open norms. Open norms are relatively general legal principles or goals to be achieved; they still have to be made specific for the situation in which they are applied (Korobkin 2000; Westerman 2009). Often, determining the boundaries leads to a kind of dialogue among stakeholders. Black (2002) calls such dialogues regulatory conversations.
Fig. 1 Typical audit setting

Much recent thinking about accountability and compliance monitoring has come to be dominated by a rather mechanical logic of auditability (Power 2009). This way of thinking is characterized by a bureaucratic demand for evidence and by reference models like COSO and COBIT that try to put reality into a rational mold of control objectives and measures to mitigate risks. The plan-do-check-act loop (Deming 1986) that was originally developed for improving quality in the automobile industry is widely adopted to make organizations learn and improve their risk management efforts (Power 2007). This Deming cycle is essentially a feedback-control loop, which presupposes that an organization is run as a machine, with levers and dials. Although corporate reality rarely fits these molds, accountability has sometimes degenerated into a box-ticking affair.

In the typical audit relation, we identify three actors: management, stakeholders, and auditors (Fig. 1). Management is accountable for its actions to the owners or shareholders of a company, who want to see a return on investment. It is also accountable to other members of society, like employees, suppliers, or others, who are dependent on the company. Every year management prepares financial statements about the results of the company: profit and loss, assets and expectations, and compliance. Auditors verify these statements and provide assurance as to their reliability. In this case, the accountability derives from a delegation of tasks: shareholders have delegated executive tasks to management. The resulting loss of control is remedied by accountability reporting.

This type of accountability relationship is typically addressed by agency theory (Eisenhardt 1989). One party, the principal, delegates work to another party, the agent. The agent must give an account of his or her actions, because the principal is distant and unable to verify or control the agent’s actions directly (Flint 1988). In addition, the agent’s incentives may conflict with those of the principal, so the agent may withhold evidence of executing the task. The resulting information asymmetry is one of the focal points of agency theory. The principal runs a risk, for two reasons: (i) she is dependent on the agent for executing the action, but does not have a way of directly controlling the agent, and (ii) she is dependent on the agent for providing evidence of execution. To overcome these risks, the principal will typically demand guarantees in the way information is being generated: internal controls.

Similar examples can also be found in other domains. Consider forms of self-regulation (Rees 1988). A company states that its products adhere to health and safety regulations. To keep its license, the company must report on performance indicators to demonstrate that it is meeting its objectives. Inspectors regularly evaluate these claims. Here, the general public acts as principal: it has delegated meeting part of its concerns (health and safety) to the company itself. The inspector must provide assurance. This form of self-regulation creates information risks similar to (i) and (ii) above. Such regulatory supervision schemes are discussed in Burgemeestre, et al. (2011).

In agency theory, information systems are seen as a means both to control behavior and to inform the principal about what the agent is doing (Eisenhardt 1989). Control measures are often embedded in information systems. Consider, for example, application controls, which are built into financial applications and automatically prevent common mistakes in transactions, like entering a negative price or a nonexistent product code. Consider also controls built into a point-of-sales system at a supermarket, which make it hard or impossible to delete the initial recordings of a sales event. Or consider the automatic maintenance of segregation of duties, which ought to make sure that some transactions or data files can only be accessed or manipulated by people in specific roles. All of these measures are implemented in or at least facilitated by information systems. Important facilitating functions, needed for reliable information provisioning and control, are baseline security, logging and monitoring, access control, and authorizations management (Romney and Steinbart 2006).
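
For illustration, an application control of the kind mentioned above can be sketched as a simple validation step at the point of data entry. The product catalog and rules below are hypothetical; the sketch only shows how such a control rejects a faulty transaction before it is recorded.

```python
# Minimal sketch of an application control embedded in a financial
# application: it rejects common data-entry mistakes before a transaction
# is recorded. Product codes and rules are illustrative only.

KNOWN_PRODUCT_CODES = {"P-001", "P-002", "P-003"}  # hypothetical catalog

def validate_transaction(product_code: str, quantity: int, unit_price: float) -> list[str]:
    """Return a list of control violations; an empty list means the entry passes."""
    violations = []
    if product_code not in KNOWN_PRODUCT_CODES:
        violations.append(f"unknown product code: {product_code}")
    if unit_price < 0:
        violations.append("negative unit price is not allowed")
    if quantity <= 0:
        violations.append("quantity must be positive")
    return violations

# Usage: the control prevents the mistake at the point of entry.
assert validate_transaction("P-001", 2, 9.95) == []
assert validate_transaction("P-999", 1, -5.00) != []
```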

Accountability

Even though accountability has become a main issue in today’s audit society (Power 1997) and information systems are crucial for the collection of evidence, the topic has received limited attention in the computer science community. When accountability is discussed in computer science, it often focuses on the human accountability for some computer program and not on the accountability achieved or supported by the software itself. For example, it is debated whether software agents or their human owners can be held morally accountable for their actions (Friedman et al. 1992; Heckman and Roetter 1999). Can programmers be held accountable for problems that occur due to errors in the software they produced (Johnson and Mulvey 1995; Nissenbaum 1994)?

One of the few works that does describe how accountability can be incorporated into an information system is the work of Friedman et al. (2006) on value sensitive design. They define accountability as the set of properties of an information system that ensure that the actions of a person, group of people, or institution may be traced uniquely to the person, people, or institution (Friedman et al. 2006). So under this view, accountability is ensured by an audit trail. In the context of regulatory compliance, Breaux and Anton (2008) also consider a software system to be accountable if “for every permissible and non-permissible behaviour, there is a clear line of traceability from the exhibited behaviour to the software artefacts that contribute to this behaviour and the regulations that govern this behaviour” (p. 12). In their view, a system is accountable when it is able to demonstrate which regulatory rules apply to which transaction and thus produce a corresponding audit trail.

However, Chopra and Singh (2014) argue that although such traceability is an important mechanism for holding someone accountable, it is neither necessary nor sufficient. First, traceability of actions is not always necessary. One can also use the outcomes of a process, rather than the way it was carried out, to hold someone accountable. Compare the difference between outcome control and behavioral control (Eisenhardt 1985). Second, traceability is not enough. What is needed in addition is a mechanism of holding the agent accountable: someone must evaluate the audit trail and confront the agent with possible deviations. This is precisely the function of the forum (Bovens 2007).

Who plays the role of the forum? In trade relationships, we often find actors with countervailing interests (e.g., buyer and seller) who can hold each other accountable. This is also why segregation of duties is considered so important: it creates independent sources of evidence, which can be used for cross verification. In bureaucracies, an artificial opposition is often created, for instance, between the front office (helping the client) and the back office (assessing conformance to policies). The installation of a dedicated risk management function, separated from the business, can also be seen in this light. When effective, such a risk function should provide a counterforce against the tendency of the business to take on too many risks; see, e.g., (Coso 2004; Power et al. 2013). The resulting critical dialogue between the business and the risk function should lead to lower risks and well-motivated controls.

For a computer system, the human user or system administrator is asked to take up the accountability function. He or she should evaluate the log files. Does it work as expected? We can also imagine a situation in which this function is taken up by a software module that automatically evaluates the audit trail at frequent intervals; see, e.g., Continuous Control Monitoring (Alles et al. 2006; Kuhn and Sutton 2010). It remains an open question whether we can really speak about accountability or “assurance” in this case or whether it is only repeated verification dressed up in an accountability metaphor.

The principle of accountability states that an agent can be held accountable for the consequences of his or her actions or projects (Turilli and Floridi 2009). In that sense, accountability is similar to the moral notion of responsibility and the legal notion of liability. Based on the philosophical and legal literature (Duff 2007; Hart 1968), it is possible to identify a number of necessary conditions for attributing accountability. Accountability can be set apart from blame and from liability, which are all related to the umbrella notion of responsibility. Blameworthiness is the stronger notion. Accountability involves the obligation to justify one’s actions. But such answerability does not necessarily imply blame. Ignorance or coercion can be valid excuses. Liability narrows the notion to a legal perspective. One can be blamed but not liable, for instance, when a contract explicitly excludes liability. Conversely, one can be liable for damages but not blamed, for instance, when everything was done to prevent disaster. To summarize, according to Van De Poel (2011), agents are only accountable for their actions in case each of the following conditions is met:
  (a) Capacity. The agent must be able to act responsibly. This includes the difficult issue of “free will.” Conversely, when the agent is under pressure or coerced into doing something, or when he or she is mentally or physically unable to act as expected, he or she is no longer held accountable.

  (b) Wrongdoing. Something must be going wrong. This means that the agent fails to live up to his or her responsibilities to avoid some undesired condition X. Or, under a different account, it means that the agent transgressed some duty D.

  (c) Causality. The agent is instrumental in causing some undesired condition X. Either this means that the agent is generally able to influence the occurrence of X or that some transgression of duty D causes X to occur.

Accountability is essentially a property of people or organizations. Does it make sense to say that one can “design for accountability?” And what type of artifact is being designed in that case? For the notion to make sense, we must assume that the behavior of people in organizations can – to some extent – be designed. This is the topic of management control (Merchant 1998): how to design an organization and its procedures in such a way that its employees’ behavior can be “controlled?”

Nowadays, many of the organizational roles, workflows, and procedures that restrict people’s behavior in organizations are embedded in and facilitated by information systems. They are designed. But the control measures themselves are also being designed, as are the ways of recording and processing information about behavior. These “measures of internal control” are the topic of this chapter. The definition runs as follows: “Internal control is a process, effected by an entity’s board of directors, management and other personnel, designed to provide reasonable assurance regarding the achievement of objectives in the following categories: (i) Effectiveness and efficiency of operations, (ii) Reliability of financial reporting, and (iii) Compliance with applicable laws and regulations” (Coso 1992, p. 13). The notion of internal control originates in financial accounting, but based on the original COSO report and the later COSO ERM framework for risk management (Coso 2004), it has become widespread. Power (1997, 2007) has argued convincingly that this accounting perspective often produces a rather mechanistic notion of risk avoidance and control. For such mechanisms, it certainly makes sense to ask to what extent the design incorporates the values of accountability and transparency. After all, sometimes it does go wrong. In many of the accounting scandals of the early 2000s, such as Enron or Parmalat, or in the demise of Lehman Brothers in 2008, people were found to hide behind a formalistic approach to guidelines and procedures, instead of taking responsibility and assessing the real risks and reporting on them (Satava et al. 2006). One could argue that in such cases, the governance structure of the risk management procedures was not adequate. It was badly designed, because it only created traceability rather than facilitating a critical dialogue.

To summarize, we can say that it does make sense to “design for accountability.” What is being designed, in that case, is the system of internal control measures, as well as business processes and information systems in which these controls are embedded. Obviously, other aspects of organizations, such as corporate culture (Hofstede, et al. 1990) or more specifically the risk culture (Power et al. 2013), also affect behavior. But we cannot properly say that these “soft” aspects of the control environment are designed. Instead they develop over time. They can at best be altered or facilitated.

Who is being held accountable? If we look at individual employees, designing for accountability would mean that conditions (a)–(c) hold for individuals. That would mean that employees have real responsibilities and do not just act to follow procedures. It also means that responsibilities and duties of different people are clearly delineated and that it is known who did what and what the consequences are of these actions. This last aspect involves an audit trail: all effects of actions must be traceable to an individual person. But it is crucial that at some point someone uses the audit trail to evaluate the behavior and confront employees with the consequences of their actions, be they good or bad.

If we look instead at accountability of the entire organization, we get a different picture. First, it means that evidence is collected to justify decisions: minutes of board meetings, requests and orders, or log files of transactions. Also the conditions under which the evidence is generated, processed, and stored matter. Information integrity must be assured, before it can function as reliable evidence. Integrity of information involves the properties of correctness (records correspond to reality), completeness (all relevant aspects of reality are recorded), timeliness (available in time for the purpose at hand), and validity (processed according to policies and procedures) (Boritz 2005). Important conditions to ensure integrity of information are good IT management, segregation of duties, access control, and basic security. Second, it means that some internal or external opposition is created, for instance, from a separate risk function or internal audit function reporting directly to the board of directors or from the external auditor or regulator. They must use this evidence to hold the management accountable. Typically, this is the hardest part (Power et al. 2013).

Transparency

For transparency, a similar discussion can be started. If the notion applies to organizations, what does it mean to say that one is “designing for transparency?” And if it also applies to systems, what kinds of systems are being designed, when one is designing for transparency?

A transparent organization is one that has a tendency to be open in communication, unless there are good reasons not to. Transparency is the opposite of secrecy. Generally, transparency is believed to be a good thing. This is apparent, for example, in the movement for “open data,” where government agencies are urged to “open up” the data they control (Zuiderwijk and Janssen 2014).

What is being designed? In the case of open data, it is about the policies and guidelines according to which officials can decide whether some specific database may be “opened up” for the public. A special example of such a policy is Executive Order 13526 on Classified National Security Information, which President Obama signed at the beginning of his first term. The purpose of this order was to declassify “old” government secrets and make them available to researchers and historians. In practice, such declassification programs face severe difficulties (Aftergood 2009). Government agencies like the CIA are often reluctant to declassify, for several reasons. In addition to genuine national security interests, Aftergood talks about bureaucratic secrecy. This is the tendency of bureaucracies to collect secrets, more than strictly needed. Apparently a bureaucracy creates incentives for officials to collect evidence but never to discharge it. A third reason is “political secrecy,” the tendency to use classification power for political purposes. “It exploits the generally accepted legitimacy of genuine national security interests in order to advance a self-serving agenda, to evade controversy, or to thwart accountability” (Aftergood 2009, p. 403).

Transparency as a property of organizations is impossible to design. As we have seen, transparency appears to be more about corporate culture and accepted practices than about formal procedures. Nevertheless, transparency can be facilitated, for instance, by providing infrastructures that make it easier to share and evaluate data.

What about transparency as a system property? Transparency does occur as a property of information systems, in several forms. Making explicit and comprehensible which rules are implemented in the system and how the information is produced is called procedural transparency (Weber 2008). An example of procedural transparency can be found in the edit history of Wikipedia and other Wikis. To increase reliability of information, anyone can examine the edit history and see who has edited, deleted, or added information (Suh et al. 2008). This should increase trust in the reliability of a source.

In electronic government research, transparency is also concerned with the (secured) disclosure of information to empower citizens to make better informed choices (Leite and Cappelli 2010). Information transparency can then be defined as “the degree to which information is available to outsiders that enables them to have informed voice in decisions and/or to assess the decisions made by insiders” (Florini 2007, p. 5).

In the development of computational models or computer simulations, transparency is concerned with making explicit the assumptions or values that are built into the model. To support users of such models in making informed decisions and developing an appropriate level of confidence in their output, it is important to explain how they work (Fleischmann and Wallace 2009; Friedman et al. 2006). In that case, transparency is defined as the capacity of a model to be understood by the users of the model (Fleischmann and Wallace 2009). To ensure transparency, the model’s assumptions about reality and its underlying values should be made explicit and testable.

Leite and Cappelli (2010) discuss work on software transparency: all functions are disclosed to users. In human-computer interaction, an interface design is called transparent when it supports users in forming a correct mental representation of the working of the system. For example, the metaphor of a trash can on a desktop suggests that deleted files can be retrieved until the trash is emptied. According to Norman (1998), a good interface should make the supporting computer technology disappear; what remains is mere functionality. This should make it easier to understand, learn, and use computer applications.

Transparency is also mentioned in the debate on open-source code: making the code available is the ultimate form of transparency. The underlying motivation is that scrutiny by the public will help to detect possible flaws and eventually increase quality of software. The same holds true for secure systems engineering. There is no such thing as “security by obscurity.” Rather, security should be provided by the algorithms or the strength of the keys themselves (Schneier 2000). By opening up the source code of the algorithms, they can be tested and improved, if necessary.

Summarizing, depending on the context and use of the system, various kinds of transparency have been addressed in information systems research. From the perspective of those who gain access to information, transparency depends on factors such as the availability of information, its comprehensibility, accessibility, and how it supports the user’s decision-making process (Turilli and Floridi 2009). When related to accountability, both transparency of information and procedural transparency are relevant. To enable a principal to trust and use information that is produced by a system on behalf of the agent, the way the information is generated must be visible: a black box would not suffice.

In general, information and communication technology is argued to facilitate accountability and transparency (Bannister and Connolly 2011). However, when accountability and transparency are improperly implemented, results can be disappointing and even contradictory. In the next section, we describe some undesirable consequences of designing for accountability and transparency.

Controversy and Threats

Accountability and transparency are facilitated by ICT. ICT makes it easier to share and spread information within corporations and among their stakeholders. Vaccaro and Madsen (2009) argue that the use of ICT has realized social modifications and transformations in stakeholder relationships. Internet-based technologies such as e-mail, corporate websites, blogs, and online communities support intensive two-way information exchange between a firm and its stakeholders. Experiences acquired in this way can be used to modify the business practice and to make organizations more transparent, accountable, and socially responsible. Especially in business research, putting more information in the hands of stakeholders is assumed to force corporations to become more honest, fair, and accountable. But ICT can do little for benevolence and nothing for openness or empowerment if those in power do not want these things to happen (Bannister and Connolly 2011). Or as Elia (2009, p. 147) puts it, “technology may enable transparency but technology cannot guide it.” Transparency through ICT increases trust because it leaves less to trust (Bannister and Connolly 2011). However, in the context of e-government, trust in public processes cannot be delivered by technology alone; the basic structures of government need to be changed too (Bannister and Connolly 2011). Transparency stems from social interaction, and it is therefore a political, not a technical, issue (Menéndez-Viso 2009).

There are all kinds of political issues associated with transparency and accountability. For example, in the case of corporate governance, information may be “strategically disclosed.” Corporations are not always willing to disclose information that will hold them accountable; often positive aspects are emphasized (Hess 2007). Elia subscribes to these findings and argues that “information technology rarely creates access to information that a company has not vetted for public consumption” (Elia 2009, p. 148). Individuals also have strong motives against disclosing information about their behavior (Florini 2007). One is that secrecy provides some insulation against being accused of making a mistake. It is much easier for an official to parry criticism when incriminating information remains secret. A second incentive is that secrecy provides the opportunity to uphold relationships with special interests. It is more difficult to maintain profitable relationships when financial transactions and the decision-making process are transparent.

There are circumstances under which secrecy is understandable and defendable. Bok (1983) lists the protection of personal identity, the protection of plans before they have had the chance to be executed, and the protection of property as legitimate reasons for justifying secrecy or, at least, for the right to control the choice between openness and secrecy. But even in a democracy, the right to make and keep things hidden is often misused to cover up political power struggles or bureaucratic failures, as we have seen in the example of Obama’s declassification order (Aftergood 2009). In general, transparency is therefore considered preferable.

So if warranted transparency is a good thing, how should it be exercised? Making all data accessible at once is not very effective. Consider the WikiLeaks disclosures, which journalists had to interpret before they became “news.” In fact, an excess of information makes accountability harder. It limits transparency, as it may be easy to drown someone in reports and hide behind an overwhelming amount of data presented through dazzling technological means (Menéndez-Viso 2009). Accountability requires useful and relevant information, not more information. To provide relevant information, it is particularly important to understand the information needs of the systems’ users (Bannister and Connolly 2011). For example, shareholders want to be confident about the quality of the systems, processes, and competencies that deliver the information in the accountability report and underpin the organization’s performance and commitments (Dando and Swift 2003).

Besides selective disclosure or over-disclosure, false information may also be distributed. The anonymity of the Internet is exploited to intentionally or unintentionally post erroneous information to manipulate public opinion (Vaccaro and Madsen 2009). Consider, for example, restaurant reviews, which can be artificially “lifted.” Even credible sources and full transparency may produce incomplete, misleading, or untrue information. Therefore, it is essential to also acknowledge expertise, verifiability, and credibility (Santana and Wood 2009). But even when there is a genuine intention to achieve accountability, full and transparent disclosure of information may not be achieved because of the costs involved. Collecting, organizing, and disseminating information requires time, effort, and money. Agents will reveal information up to the point where the benefit from disclosure equals the costs. Typically this point is reached before full disclosure (Vishwanath and Kaufmann 2001).

To summarize, transparency can be too much of a good thing. Transparency mediated by information and communication technology runs a risk of being shallow, arbitrary, and biased toward corporate interests (Elia 2009). In this way transparency only creates apparent accountability. Partial or superficial transparency is considered to be more damaging than none at all (Elia 2009; Hess 2007). Providing limited transparency may be a technique to avoid disclosures which are truly relevant to stakeholder interests (Elia 2009). Such partial disclosures may divert attention away from more effective means of accountability (Hess 2007).

Designing for Accountability and Transparency

What does it mean to design for accountability? Or what does it mean to design for transparency? In the previous section, we discussed these questions in general. We will now make it specific in order to demonstrate – by example – that answering these questions makes sense. In addition we will explain some of the design issues to be addressed.

What kinds of systems can be designed for accountability or transparency? Here we will assume that we are designing and developing an information system, in its broadest (socio-technical) sense: a collection of devices, people, and procedures for collecting, storing, processing, retrieving, and disseminating information, for a particular purpose. In the case of accountability, we are essentially designing the collection and storage of information about the actions of the agent, which may then be used as evidence in an accountability relation, i.e., for the purpose of providing justification to the principal. In the case of transparency, we are essentially designing a dissemination policy. We assume the evidence is there, but unless there are mechanisms to facilitate dissemination, it will be kept private. As we have seen, transparency may be restricted by legitimate secrets, but this requires an explicit classification decision.

Consider an agent that is accountable to a forum. As announced in the introduction, we focus here on compliance reporting. The agent is a company, and the forum consists of a regulator acting on behalf of the general public. We will now present a simplified architecture of a reporting system that could facilitate such an accountability relation. What we are monitoring are the primary processes of the agent, but also whether the agent is “in control”: is the agent achieving its control objectives and, if not, is it adjusting the process? Therefore, the internal control processes need to be monitored too. All of this generates evidence, which is stored, but also compiled into an accountability report and sent to the forum. The forum will then evaluate the report and pass judgment (Fig. 2).
Fig. 2 Simplified compliance reporting architecture
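
Read as a process, the architecture in Fig. 2 amounts to a small pipeline: primary and internal control processes produce evidence, the evidence is stored, compiled into a report, and judged by the forum. The following is a minimal sketch of that pipeline; the class names and the toy judgment criterion are our own illustrative assumptions, not part of the architecture itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of the reporting architecture in Fig. 2: primary and
# internal control processes generate evidence, which is stored, compiled
# into an accountability report, and judged by the forum.

@dataclass
class EvidenceRecord:
    source: str   # e.g. "primary process" or "internal control"
    event: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

evidence_store: list[EvidenceRecord] = []   # storage of generated evidence

def monitor(source: str, event: str) -> None:
    """Monitoring of primary and internal control processes adds evidence."""
    evidence_store.append(EvidenceRecord(source, event))

def compile_report() -> dict:
    """Compile an accountability report on the basis of the stored evidence."""
    return {
        "total_events": len(evidence_store),
        "control_events": sum(1 for r in evidence_store if r.source == "internal control"),
    }

def forum_judgment(report: dict) -> str:
    """The forum evaluates the report and passes judgment (toy criterion)."""
    return "adequate" if report["control_events"] > 0 else "ask for clarification"

# Usage: monitoring feeds the store; the compiled report goes to the forum.
monitor("primary process", "sale recorded")
monitor("internal control", "daily cash count reconciled")
print(forum_judgment(compile_report()))   # prints "adequate"
```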

Based on this reporting architecture, we can list a number of requirements. First we list requirements that are related to accountability. Next we will list some requirements specifically about transparency:
  (A1) Recording evidence. First we have to make sure there is reliable data at all. This concerns the initial point of recording of behavior (Blokdijk et al. 1995). Data should be recorded from an independent source or, if that is impossible, automatically. For raw data to become evidence, it must be interpreted as representing specific events or propositions, which are meaningful in the legal context. It should be evidence of something. Moreover, the evidence should be recorded in a read-only storage device that allows for easy retrieval. The read-only device should make sure the evidence is not lost or manipulated. A minimal code sketch of such an evidence store is given after this list.

  (A2) Segregation of duties. The objectivity of the evidence is based on the principle of segregation of duties. According to this age-old accounting principle, the organizational roles of authorizing a decision, executing it, and recording its effects should be executed by independent actors (Romney and Steinbart 2006; Starreveld et al. 1994). Even stronger is the principle of countervailing interests. Parties to a commercial transaction generally have countervailing interests. Therefore, they will scrutinize the accuracy and completeness of data in commercial trade documents. For instance, buyer and seller will cross verify the price in an invoice. In government, countervailing interests are artificially created, for instance, between front office and back office.

  (A3) How much data should be recorded? Generally, what should be recorded is an audit trail: a trace of decisions, which allows others to recreate the behavior after the fact. If too much is recorded, auditors will drown in details and relevant material will not be found. Moreover, there may be performance limitations. If too little is recorded, the data is useless as evidence. The difficulty is that future information needs are unknown. The trade-off can be solved by a risk analysis, which should involve all parties interested in accountability: risk officer, compliance officer, business representative, and system administrator. They should evaluate the risk of missing evidence against performance losses.

  (A4) What should be recorded? Generally we are interested in the effectiveness of measures in meeting specific control objectives, often based on a more general regulation or guideline. There are several ways of defining controls, based on different performance indicators (Eisenhardt 1985). We can record evidence of the actual performance, if that can be measured at all, we can record evidence of the actions that were undertaken, or we can record evidence of the necessary side conditions, such as qualified staff. Consider, for example, the monitoring of hygiene standards in the food industry. Performance can be measured directly by taking samples: the amount of E. coli bacteria per sample is considered a good indicator of relative hygiene. We can also measure the cleaning actions, whether they are indeed executed according to schedule. Finally, we can verify if all staff has followed the hygiene-at-work course (Fig. 3).

  (A5) Being in control. Instead of collecting evidence of the primary processes, in many cases it makes sense to look at the maturity level of the internal controls. Here we can test whether the company is actually “in control.” For example, we can verify that whenever there are incidents, there is a follow-up, and that the effectiveness of the follow-up is tested. This kind of thinking is very common in quality management and risk management (Coso 2004). A typical representative of this approach is the plan-do-check-act cycle (Deming 1986).

  (A6) Critical Forum. As we argued above, evidence of past actions is not enough to bring about accountability. Someone must critically examine the evidence and confront those who are responsible with the consequences of their actions. This is the role of the forum (Bovens 2007). Organizing a critical opposition is key to accountability. This aspect is part of the governance structure of an organization. For example, in risk management, a separate risk function is created, which reports directly to the board of directors to ensure independence from the business, which it needs to challenge (Iia 2013).

Fig. 3 Determining kinds of evidence
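
As announced under (A1), the following is a minimal sketch of such an evidence store: records are appended automatically, never altered afterward, and chained so that manipulation becomes detectable. Hash-chaining is only one possible way to approximate a “read-only device”; it is used here purely for illustration, and all names are our own.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch of requirement (A1): evidence is recorded automatically,
# append-only, and in a form that makes later manipulation detectable.

class AuditTrail:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, actor: str, event: str, payload: dict) -> dict:
        """Append one piece of evidence; it should be evidence *of* something."""
        entry = {
            "actor": actor,          # supports traceability to a role (cf. A2)
            "event": event,          # the event this record is evidence of
            "payload": payload,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "previous_hash": self._entries[-1]["hash"] if self._entries else "",
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the hash chain; any altered or removed entry breaks it."""
        previous_hash = ""
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["previous_hash"] != previous_hash or entry["hash"] != expected:
                return False
            previous_hash = entry["hash"]
        return True

# Usage: record a sales event automatically at the point of sale.
trail = AuditTrail()
trail.record("cash register 3", "sale", {"amount": 12.50, "vat": 2.63})
assert trail.verify()
```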

The following requirements have to do with transparency, namely, with policies determining the dissemination of accountability evidence:
  (T1) Reliable Reporting. An accountability report needs to be accurate and complete. That means it must be compiled on the basis of all the evidence generated in the processes discussed above and only on the basis of such evidence. As they perform a communicative function for some specific audience, reports should be coherent, informative, true, relevant, and not over-informative with respect to the information needs of the audience; compare the maxims of Grice (1975).

  (T2) Audience. All reports are written for a specific audience, in this case the forum. Generally reports are compiled according to standards, which should be based on the information needs of the forum. Such information needs are in turn based on the decision criteria for the judgment that depends on the report. In practice this means that the one-size-fits-all approach to annual financial statements, which are supposed to serve the general public, is inappropriate. Different stakeholders have different information needs. Modern technology like XBRL makes it relatively easy to differentiate and generate different reports on the basis of the same evidence (Debreceny et al. 2009). A small illustrative sketch of this idea is given below.

  (T3) Statement. Accountability reports are more than a collection of evidence. They are compiled and accompanied by a statement of the management that the report is considered an accurate and complete representation of reality. This statement generates a commitment to the truth of the information; it also generates a sense of responsibility and, in many cases, legal liability. For example, under the Sarbanes-Oxley Act, managers are personally liable for misstatements. This is supposed to have a deterrent effect.

The role of the external auditor is interesting in this respect. Financial statements are directed to the general public, including shareholders. The accountant does not play the role of “significant other” but merely provides assurance. The accountant performs an investigation of the records on which the statements are based, in order to attest to the accuracy and completeness of the financial statements, which are compiled by the company itself (Knechel, et al. 2007). In the architecture above, that means that the process “accountability” has been partly delegated to an independent agency.
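
Returning to requirement (T2), the idea of generating different reports for different audiences from the same evidence base can be illustrated with a small sketch. It does not use XBRL; the audiences, fields, and figures are hypothetical.

```python
# Illustrative sketch of (T2): different reports, tailored to the information
# needs of different audiences, are generated from the same underlying evidence.

EVIDENCE = [
    {"type": "sale", "amount": 120.0, "vat": 25.2},
    {"type": "sale", "amount": 80.0, "vat": 16.8},
    {"type": "control_check", "control": "cash register sealed", "passed": True},
]

AUDIENCE_VIEWS = {
    # hypothetical mapping from audience to the fields it needs
    "tax administration": lambda ev: {
        "turnover": sum(e["amount"] for e in ev if e["type"] == "sale"),
        "vat_due": sum(e["vat"] for e in ev if e["type"] == "sale"),
    },
    "shareholders": lambda ev: {
        "turnover": sum(e["amount"] for e in ev if e["type"] == "sale"),
        "controls_passed": all(e["passed"] for e in ev if e["type"] == "control_check"),
    },
}

def generate_report(audience: str) -> dict:
    """Compile a report view for one audience from the shared evidence base."""
    return AUDIENCE_VIEWS[audience](EVIDENCE)

print(generate_report("tax administration"))
print(generate_report("shareholders"))
```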

Another interesting remark concerns the nature of evidence. Typically, depending on the context, some records or logs are said to “count as” evidence in a legal or institutional sense. This counts-as relation has been studied by John Searle (1995). Using such constitutive rules, we are creating our complex social environment, in which many facts are not brute facts but rather institutional facts. Generally, counts-as rules have the following form: in institutional context S, act or event X counts as institutional fact or event Y. Consider, for example, a receipt. When I buy something, the paper receipt I am given counts as legal evidence that I have indeed bought and received the product for that price. Searle’s theory shows that evidence is essentially a social construction. It only makes sense within a specific institutional context, in our case defined by the forum. For example, the auditing term “test of operating effectiveness” has a different and much more limited meaning than the general meaning of the term “effectiveness.” Operating effectiveness means that a control has been operational for the full duration of the period under investigation. It is not concerned with the effectiveness of the measure, whether it was successful in achieving its goals. Such conceptual differences between auditor and forum may hamper transparency.
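
Searle-style constitutive rules have a simple, regular form, which makes them easy to represent explicitly, for instance, in a compliance knowledge base. The following sketch encodes the receipt example as data; the rule content and all names are illustrative only.

```python
from dataclasses import dataclass

# Illustrative encoding of constitutive rules: "in institutional context S,
# act or event X counts as institutional fact or event Y."

@dataclass(frozen=True)
class CountsAsRule:
    context: str        # institutional context S
    brute: str          # act or event X
    institutional: str  # institutional fact or event Y

RULES = [
    CountsAsRule(
        context="consumer sale",
        brute="paper receipt handed over at the till",
        institutional="legal evidence that the product was bought for that price",
    ),
]

def counts_as(context: str, event: str) -> list[str]:
    """Return the institutional facts that the event counts as in this context."""
    return [r.institutional for r in RULES if r.context == context and r.brute == event]

print(counts_as("consumer sale", "paper receipt handed over at the till"))
```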

Value-Based Argumentation

As we have seen in the previous two sections, designing is essentially a matter of solving trade-offs. Consider issues such as: how much evidence? What is the cost of compliance? What is legitimate secrecy? We believe that such trade-offs can only be solved in a dialogue between stakeholders. Generally, different stakeholders will represent different interests. So it is important to have all parties present during such design workshops. There are many techniques to facilitate decision making in groups. Consider, for example, the Delphi method, which tries to gather and evaluate innovative ideas from a group of experts. However, here we are dealing with a specific decision task: designing a system. Not only opinions matter, but also the technical feasibility of the resulting system. Generally, the dialogue will be a mixture of technical arguments about facts, effectiveness of measures or feasibility, and more motivational arguments about the relative desirability of specific objectives and underlying social values.

Argumentation is an interactive process in which agents make assertions (claims) about a certain topic, which support or attack a certain conclusion or support or attack the assertions of the opponent. There is no single truth; what matters is the justification of the conclusion (Walton 1996). In previous research, we have developed value-based argumentation theory, a dialogue technique to make the values underlying decisions about system requirements explicit (Burgemeestre et al. 2011, 2013). The approach is based on an earlier approach to value-based argumentation used for practical reasoning (Atkinson and Bench-Capon 2007). We believe that the structured nature of the argumentation framework with its claims and counterattacks closely resembles an audit process as we encounter it in practice. In the remainder, we will give a brief exposition of the approach, enough to show that there are indeed techniques that can help facilitate design processes, which take values like accountability and transparency into account.

Walton (1996) organizes practical reasoning in terms of argument schemes and critical questions. An argument scheme presents an initial assertion in favor of its conclusion. Now it is up to the opponent to try and disprove the assertion. The opponent can challenge the claims of the proponent by asking so-called critical questions. Originally, the argumentation scheme uses means-end reasoning: what kinds of actions should we perform in order to reach our goals? This already captures debates about effectiveness and about alternatives. But how are goals justified? The answer is by social values. Perelman (1980) indicates that social values can account for the fact that people may disagree upon an issue even when all parties reason rationally. In the business world, consider values like profit, safety, or quality. Such values are embedded in the corporate culture (Hofstede et al. 1990). For example, a culture which values short-term profits over security – as apparent from payment and incentive schemes – will be more likely to lead to behavior which violates a security norm than a culture which values security over profits.

Atkinson et al. (2006) and Atkinson and Bench-Capon (2007) have adapted Walton’s argument scheme and added social values. In our work on designing security systems, we have in turn adapted Atkinson et al.’s argument scheme to the requirements engineering domain (Burgemeestre et al. 2013). This has resulted in the following argumentation scheme:
  (AS) In the current system S1,
       we should implement system component C,
       resulting in a new system S2 which meets requirements R,
       which will realize objective O
       and will promote value V.

An argument scheme like AS asserts an initial position. Opponents may then ask critical questions (CQ), trying to undermine the assumptions underlying the argumentation. Atkinson et al. (2007) provide an extensive list of critical questions, challenging the description S1 of the facts of the case, the effectiveness of the choice of action C, or the legitimacy of the social value V. We have adapted this list of questions (Burgemeestre et al. 2013). In our exposition here we use a simplified version. Note that a further simplification would be possible if we connected values to requirements directly, without the additional layer of objectives. However, in practice the additional layer is needed to deal with objectives, which are more general than system requirements but which are not values, for example, regulatory objectives.
  CQ1. Is the current system S1 well described?

  CQ2. Will implementing component C result in a system which meets requirements R?

  CQ3. (a) Are requirements R sufficient to achieve objective O? (b) Are requirements R necessary to achieve O?

  CQ4. Does the new system S2 promote the value V?

  CQ5. Are there alternative systems S’ that meet requirements R, and are there alternative sets of requirements R’ that achieve objective O and promote value V?

  CQ6. Does component C have a negative side effect N, which demotes value V or demotes another value W?

  CQ7. Is implementing component C, meeting R, and achieving O feasible?

  CQ8. Is value V a justifiable value?

Critical question CQ1 is about the accuracy of the current system description. CQ2 is about the effectiveness of the component; CQ3 is about the effectiveness of the requirements specification in meeting the objective. This involves two questions: sufficiency refers to the question whether R is enough to achieve O; necessity refers to the question whether no requirements can be left out. This is related to efficiency, in a way: one wants to achieve objective O with as few demands as possible. Note that there may be several ways of achieving the same objective. CQ4 considers whether the resulting system as a whole will promote the value. CQ5 considers alternative solutions. CQ6 considers possible negative side effects. Typically, the costs of an investment are listed here. CQ7 considers the feasibility of the solution. In other words, are the assumptions on which the reasoning is based warranted? Finally, CQ8 considers the relative worth of the value itself.
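
To show that the scheme and its critical questions are concrete enough to support tooling, the following sketch encodes AS and CQ1–CQ8 as plain data structures, so that a design dialogue can record which assumptions have been challenged. The example proposal loosely echoes the reliable cash register controls mentioned earlier; all names are our own illustrative choices.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the argument scheme (AS) and critical questions.

@dataclass
class DesignArgument:
    current_system: str        # S1
    component: str             # C
    requirements: list[str]    # R
    objective: str             # O
    value: str                 # V
    challenges: list[str] = field(default_factory=list)

CRITICAL_QUESTIONS = {
    "CQ1": "Is the current system S1 well described?",
    "CQ2": "Will implementing component C result in a system which meets requirements R?",
    "CQ3": "Are requirements R sufficient and necessary to achieve objective O?",
    "CQ4": "Does the new system S2 promote the value V?",
    "CQ5": "Are there alternative systems or requirement sets that achieve O and promote V?",
    "CQ6": "Does component C have a negative side effect that demotes V or another value W?",
    "CQ7": "Is implementing C, meeting R, and achieving O feasible?",
    "CQ8": "Is value V a justifiable value?",
}

def challenge(argument: DesignArgument, cq_id: str) -> str:
    """Record that the forum poses a critical question against the argument."""
    argument.challenges.append(cq_id)
    return CRITICAL_QUESTIONS[cq_id]

# Usage: a software provider proposes a design; the forum challenges it.
proposal = DesignArgument(
    current_system="cash register without event log",
    component="tamper-evident sales event log",
    requirements=["all sales events are recorded", "records cannot be deleted"],
    objective="reliable financial recording",
    value="accountability",
)
print(challenge(proposal, "CQ6"))  # e.g. ask about side effects such as cost
```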

Dependency Graphs

In addition to this dialogue technique with argumentation schemes and critical questions, we have also developed a graphical notation (Burgemeestre et al. 2011). The diagrams are called dependency graphs. They are based on AND/OR graphs and depict the dependency relationships from general objectives, requirements, and other system properties to specific system components (Fabian et al. 2010). They are similar to diagrams in the i* modeling approach (Yu 1997). We can settle trade-offs between objectives, requirements, and properties by linking objectives to social values and thus determining their relative priority.

Definition 1

(Dependency Graph) Given a set of system components C, properties P (including requirements R), objectives O, and values V, a dependency graph is defined as a directed acyclic graph ⟨N, A+, A−⟩, whose nodes n1, n2, … can be any element of C, P, O, or V. There are two kinds of arcs: a positive arc n1 →+ n2 means that n1 contributes to achieving n2; a negative arc n1 →− n2 means that n1 contributes to not achieving n2.

For example, the graph in Fig. 3 may depict the contents of a dialogue about security measures. It is agreed among the participants that a system with a weak password policy contributes to the value usability, whereas a strong password policy promotes security. A strong password policy has the property of being hard to remember for users. This property in turn negatively affects the value usability. Depending on the situation, different choices will be made. In a school, usability probably outweighs security, but not in a bank (Fig. 4).
Fig. 4  Simple example of a dependency graph

By exchanging arguments, dialogue participants increase their shared knowledge. They learn things about the system and about each other. The dependency graphs can be understood as a means to capture the shared knowledge about the system at a certain point in the dialogue. For example, a designer may argue that the property of being hard to remember also negatively affects security, as it may lead to people writing their passwords down. Such a dialogue move would add a second negative arc: hard to remember →− security.
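A dependency graph in the sense of Definition 1 can be represented with two sets of labeled arcs. The sketch below is a minimal illustration of our own, not the notation or tooling used in the cited work; the class DependencyGraph and its methods are hypothetical names. It encodes the password-policy example, including the second negative arc added during the dialogue.

```python
class DependencyGraph:
    """Directed acyclic graph <N, A+, A-> over components, properties,
    objectives, and values (cf. Definition 1)."""

    def __init__(self):
        self.nodes = set()
        self.pos_arcs = set()   # (n1, n2): n1 contributes to achieving n2
        self.neg_arcs = set()   # (n1, n2): n1 contributes to NOT achieving n2

    def promote(self, n1, n2):
        self.nodes.update({n1, n2})
        self.pos_arcs.add((n1, n2))

    def demote(self, n1, n2):
        self.nodes.update({n1, n2})
        self.neg_arcs.add((n1, n2))

    def effects_on(self, target):
        """Return which nodes promote or demote a given node (e.g., a value)."""
        return {
            "promotes": [n for (n, m) in self.pos_arcs if m == target],
            "demotes": [n for (n, m) in self.neg_arcs if m == target],
        }

# Password-policy example from the dialogue above.
g = DependencyGraph()
g.promote("weak password policy", "usability")
g.promote("strong password policy", "security")
g.promote("strong password policy", "hard to remember")
g.demote("hard to remember", "usability")
g.demote("hard to remember", "security")   # the arc added by the designer's move

print(g.effects_on("security"))
# {'promotes': ['strong password policy'], 'demotes': ['hard to remember']}
```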

Alternatives

In the field of requirements engineering, there are other approaches that also take nonfunctional requirements such as accountability, security, or privacy into account. In particular, there is the concept of soft goals: “subjective and unstructured or ill-defined qualities” (Bresciani et al. 2004; Yu 1997). Just like values, soft goals can be used to prioritize, evaluate, and compare other more specific requirements. However, these approaches do not explicitly deal with the fact that there may be different stakeholders with possibly opposing objectives, between which a decision must be made. In other words, they do not take the dialogue aspect seriously. A recent exception is presented by Prakken et al. (2013), who use argumentation to facilitate risk analysis in designing secure systems.

In the field of AI and law, there is a whole body of literature on the use of formal argumentation, for example, to capture legal reasoning about evidence; see, e.g., (Bex et al. 2003). Value-based argumentation also derives from that tradition (Atkinson et al. 2006). Part of the research on argumentation has branched off and developed relatively abstract mathematical models, called Abstract Dialectical Frameworks (Brewka et al. 2013). Now it turns out that these ADFs are very similar to the dependency graphs presented here. An ADF is essentially a graph, where nodes represent statements and links represent either positive (+) or negative (−) contributions to reaching a specific conclusion.

Experiences: Cooperation Between Regulators and Software Providers

Currently the quality of cash registers and point-of-sales (POS) systems available on the market is highly diverse. Enterprises can buy anything from second-hand cash registers to high-end POS systems that are integrated with back-office information systems. Especially in the lower market segment, competition is based on price rather than quality. Furthermore, cash register vendors feel pressured to satisfy customer requests to implement mechanisms that might enable fraud. A famous example is the testing mode, which allows managers to set the internal cash flow counter, at the heart of a point-of-sales system, back to zero. By contrast, point-of-sales systems that do have a reliable counter can be used as an independent source of evidence about the incoming cash flow (recall A1). For this reason, the testing mode is nicknamed the “Marbella button”: it allows employees to divert revenue without being detected and book a nice holiday.

In this case study, we discuss the collaboration of the Dutch Tax and Customs Administration (DTCA) with vendors and developers of high-end POS systems to develop a quality mark. Such a mark will serve as a signal to differentiate high- from low-quality cash registers. In addition, a quality mark will increase awareness among businesses of the need for reliable point-of-sales systems. Note that corporate taxes are generally calculated as a percentage of the company’s revenues, and VAT is calculated as a percentage of the sales; this explains the involvement of the tax administration. In the words of director-general Peter Veld: “This will make it easier for businesses to choose a cash register that will not hide revenue from the tax office.”1 For the quality mark to become a success, norms or standards are needed to differentiate reliably between high- and low-quality cash systems. A quality mark should only be given to systems that do not enable fraud.

DTCA and market participants have jointly developed a set of norms that systems must meet to qualify for the quality mark, called “Keurmerk betrouwbare afrekensystemen” (quality mark for reliable point-of-sales systems). The objective is that “a reliable POS system should be able to provide a correct, complete and continuously reliable understanding of all the transactions and actions performed with and registered by the system.” The norms are set up in a principle-based fashion, to make them applicable to a wide variety of POS systems on the market and flexible enough to accommodate future developments. There are two versions: one for closed systems – these conform to all constraints of DTCA – and one for systems that can be closed, given the right settings. The latter are delivered with a delivery statement, which states that at the point of delivery the system was configured in such a way that it conformed to all constraints.

The norms are grouped into four main control objectives: (1) register all events, (2) preserve integrity of registrations, (3) secure storage of registrations, and (4) provide comprehensible and reliable reporting. Norms then prescribe specific requirements that a POS system must meet in order to fulfill the control objective. As an example, we will now discuss the first control objective in more detail (Table 1). The norms use a conceptual model of a sales transaction, consisting of the following phases: selection of the goods, showing financial data (e.g., price), formalization of the transaction, confirmation, and payment. Formalization is the phase during which all events must be recorded. Formalization starts at the latest when data of a financial nature are being shown. The overarching control objective is that all events performed with a POS system are registered. To be able to determine whether transactions are paid and registered in an accurate, complete, and timely manner, not only the transactions but also special actions like discounts, returns, canceled transactions, withdrawals, training sessions, and corrections should be identified and registered. To assure completeness of the audit trail, corrections should not delete or alter the original transaction data but rather reverse them by an additional credit booking.
Table 1  Example of a control objective and corresponding norms

Control objective 1: register all events

Nr  Norm
1   All events occurring on the POS system during the formalization phase are being registered
2   From the start of the formalization phase, data about transactions are being stored
3   Corrections are processed without altering the original transaction. Additional corrections must be traceable to the original transaction with an audit trail
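The norm on corrections (norm 3 in Table 1) can be illustrated with a minimal sketch of an append-only transaction register, in which a correction is recorded as a reversing credit booking that refers back to the original entry. This is our own hypothetical illustration, with made-up field and class names; it is not an excerpt from the quality mark norms or from any vendor’s system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass(frozen=True)
class Entry:
    entry_id: int
    amount: float                      # negative amount = credit (reversal)
    description: str
    corrects: Optional[int] = None     # audit trail: link to the original entry
    timestamp: datetime = field(default_factory=datetime.utcnow)

class TransactionLog:
    """Append-only register: entries are never altered or deleted."""

    def __init__(self):
        self._entries: List[Entry] = []

    def register(self, amount, description, corrects=None) -> Entry:
        entry = Entry(len(self._entries) + 1, amount, description, corrects)
        self._entries.append(entry)
        return entry

    def correct(self, original: Entry, description: str) -> Entry:
        # Reverse the original by an additional credit booking,
        # traceable to the original entry via 'corrects'.
        return self.register(-original.amount, description, corrects=original.entry_id)

    def audit_trail(self) -> List[Entry]:
        return list(self._entries)

# Usage: a mistyped sale is corrected without touching the original entry.
log = TransactionLog()
sale = log.register(25.00, "sale: 1x coffee machine filter set")
log.correct(sale, "correction: wrong article scanned")
log.register(2.50, "sale: 1x coffee filter pack")
for e in log.audit_trail():
    print(e.entry_id, e.amount, e.description, "corrects:", e.corrects)
```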

Example Argumentation

A well-known fraud scenario is called “fake sales.” The cashier first scans or enters the product into the POS system and the transaction data is displayed. The customer pays the money to the cashier, takes the product, and leaves the shop. Instead of confirming the sales transaction (formalization) and registering the transaction data in the database, the cashier then cancels the transaction and steals the money. The following dialogue shows two participants of the working group, discussing controls to address the “fake sales” scenario.
A1: Suppose a POS system is used to show the customer an example offer. The transaction is canceled. The product is not sold. Should this event be registered by the system (CQ3a)?

B1: Even though the product is not delivered, it is a transaction because financial data is shown to the customer and all financial transactions should be registered.

A2: Currently the removal of an offer is registered as a “no sale.”2 All other details of the transaction are removed from the database.

B2: The “no sale” option enables fraud. When a customer does not request a receipt, the entrepreneur can choose the “no sale” option instead of confirming the sales transaction. The transaction data and revenues are not registered in the POS system database, and the entrepreneur can secretly collect the money (CQ6).

A3: But an honest entrepreneur will not abuse the “no sale” option to commit fraud.

B3: Scanning a product without registering the transaction can also form a risk for the entrepreneur. Employees can also steal money when details on sales transactions are not registered by the POS system (CQ6).

B4: To prevent abuse of the “no sale” option, one should keep a record of why the “no sale” action was used and by whom (CQ5).

A4: But writing a motivation for each canceled transaction disrupts the sales process (CQ7).

B5: Then at least metadata on the “no sale” action should be registered in the audit trail, for example, the employee that performed the action, the time and date of the transaction, and the duration of the transaction (CQ5).

A5: Ok.

Example Dependency Diagram

We now define the values, objectives, system properties (including requirements), and system components used to capture the essence of the dialogue, and we model the dependencies between these concepts in a dependency graph.

Values:
  • v1: Completeness
  • v2: Profitability
  • v3: Accountability
  • v4: Usability

Objectives:
  • o1: Register all (trans)actions
  • o2: Steal money
  • o3: Efficient sales process
  • o4: Detect fraud

Properties:
  • p1: All events during the formalization phase of a transaction are registered.
  • p2: All actions are registered.
  • p3: All corrections are registered.
  • p4: Corrections do not alter transactions.
  • p5: Register metadata about “no sale.”

Components:
  • c1: “No sale”
  • c2: Motivation
  • c3: Metadata
We will now visualize the dependencies between these concepts as expressed in the dialogue in a series of dependency diagrams (Figs. 5, 6, and 7).
Fig. 5  Dependency diagram “no sale”

By using the “no sale” option, the transaction is not confirmed and the actions preceding the transaction are not registered. Therefore, objective o1 of registering all events is not achieved and completeness is not promoted. Furthermore, the “no sale” option is used with the intention to steal money, which has a negative effect on profitability and accountability (Fig. 6).
Fig. 6  Dependency diagram “no sale” with a motivation

When in addition some motivation must always be included that explains by whom and why the “no sale” option was used, we can say that all actions are in fact registered, o1 is achieved, and completeness is promoted. However, having to include a motivation has a negative effect on the efficiency of the sales process, and therefore the usability of the system is demoted (Fig. 7).
Fig. 7  Dependency diagram “no sale” with metadata logging

At the end of the dialogue, registering metadata is proposed as an alternative, compensating measure. In this situation, not all actions are registered, because the “no sale” option remains, but metadata is registered instead, which serves as an indicator to determine the completeness of transactions. By analyzing the metadata, deviating patterns in the sales process can be found and fraud can often be detected; for example, some employees may perform significantly more “no sale” actions than others. Because such fraud can be detected, this feature has a positive influence on accountability and on the completeness of transactions.
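Reusing the hypothetical DependencyGraph sketch introduced earlier, the outcome of the dialogue (Fig. 7) could be encoded roughly as follows. The node names follow the lists above; the choice of arcs is our own reading of the diagrams, not a transcription of the published figures.

```python
# Assumes the DependencyGraph class from the earlier sketch is in scope.
g = DependencyGraph()

# "No sale" (c1) prevents registering all (trans)actions (o1), which would have
# promoted completeness (v1); it also enables stealing money (o2).
g.demote("c1: no sale", "o1: register all (trans)actions")
g.promote("o1: register all (trans)actions", "v1: completeness")
g.promote("c1: no sale", "o2: steal money")
g.demote("o2: steal money", "v2: profitability")
g.demote("o2: steal money", "v3: accountability")

# Compensating measure: metadata logging (c3) registers metadata about
# "no sale" (p5), which supports fraud detection (o4).
g.promote("c3: metadata", "p5: register metadata about 'no sale'")
g.promote("p5: register metadata about 'no sale'", "o4: detect fraud")
g.promote("o4: detect fraud", "v3: accountability")
g.promote("o4: detect fraud", "v1: completeness")

print(g.effects_on("v3: accountability"))
```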

Lessons Learned

The case study shows that we can use value-based argumentation to make trade-offs between different values, objectives, and system components explicit. From a dialogue we derive the participants’ considerations on the design of reliable POS systems. By modeling the dependencies between these concepts, we can compare the advantages and disadvantages of a specific solution. As values are now explicitly linked to requirements and system components, we may conclude that our methodology does indeed provide a conceptual approach to take values into account in the design of information systems.

We also found that our approach has limitations; see also Burgemeestre et al. (2011). The example of “fake sales” is rather simple, and analyzing a more complex problem will require considerably more effort. The outcomes may be difficult to depict in dependency diagrams. Furthermore, human dialogues can be very unstructured, so it may take considerable effort to capture them in terms of argumentation schemes and critical questions. Finally, as in all conceptual modeling, much depends on the modeling skills of the person making the diagrams: small differences in notation can have a large effect.

Conclusions

An accountability relationship exists when one agent performs actions and feels obliged to provide evidence of his or her actions to the forum: some significant other, such as a regulator, the general public, or the principal on whose behalf the actions are performed. In this chapter we focus in particular on the application of such a setting to compliance monitoring. In this case, the agent is a company; the forum is a regulator. The accountability reports are required as part of a self-regulation or co-regulation scheme. Systems designed for accountability make sure that relevant information about the agent’s actions is recorded reliably and stored securely, so that all actions can be traced later. Provided that the necessary precautions and guarantees are built into the way the information is recorded and processed – internal controls – such information may then serve as evidence to justify decisions later.

Designing for accountability is therefore closely related to the question of what counts as evidence. But designing for accountability is more than implementing traceability. It also involves organizing an effective opposition, either internal (an independent risk function) or external (a regulator, shareholders). This opposition must actively take on the role of a forum, use the audit trail to evaluate behavior, and confront agents with the consequences of their actions, either good or bad.

Designing for transparency amounts to designing a dissemination policy for the evidence, once it has been collected. Transparency can be restricted by illegitimate attempts to manipulate the dissemination process, for example, by causing information overload or by providing false information. Transparency can also be restricted by legitimate secrets. What is considered legitimate in a specific domain needs to be decided up front. Therefore, classification policies should be maintained that have disclosure as the default. For example, consider a system that would “automatically” reveal secrets after their expiry date.
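As a minimal sketch of what such a disclosure-by-default classification policy might look like, consider the fragment below. It is our own hypothetical illustration, not a system described in this chapter; the record fields and the function may_disclose are made up.

```python
from datetime import date

def may_disclose(record: dict, today: date = None) -> bool:
    """Disclosure is the default; withholding requires an explicit expiry date,
    after which the record is released automatically."""
    today = today or date.today()
    secret_until = record.get("secret_until")   # None means: no legitimate secret
    if secret_until is None:
        return True                             # default: disclose
    return today >= secret_until                # automatic release after expiry

doc = {"title": "inspection report", "ground": "ongoing investigation",
       "secret_until": date(2016, 1, 1)}
print(may_disclose(doc, today=date(2014, 11, 6)))   # False: still protected
print(may_disclose(doc, today=date(2016, 6, 1)))    # True: expiry has passed
```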

Like all design efforts, designing for accountability or transparency involves trade-offs. Social values like accountability or transparency may conflict with profitability, with safety and security, or with mere tradition. In this chapter we have shown a dialogue technique for making such trade-offs explicit: value-based argumentation theory. The idea is that one stakeholder makes a proposal for a specific design, motivated by goals (design objectives) and underlying social values. The other stakeholders may then ask questions, challenge the underlying assumptions, or propose alternatives. The proposer is forced to defend the proposal, using arguments. Eventually, when the space of possible arguments is exhausted, the proposal that remains is best suited, because it has survived critical scrutiny. The knowledge exchanged in the dialogue can be depicted in a dependency graph.

We have described a case study of the development of quality standards for point-of-sales systems, supported by the tax office. In the case study, we have demonstrated that designing for accountability is essentially a group effort, involving various stakeholders in opposite roles. Stakeholders challenge assumptions and provide clarification, thus increasing shared knowledge about the design and improving the motivation for specific design choices. We have also shown that value-based argumentation is a technique that can in fact be used to analyze the trade-offs in the case and provide fruitful insights. In addition, we have used dependency graphs to provide graphical summaries of the dialogue outcomes.

Although we derive these conclusions from a relatively specific example, we believe that they are generic. Values are intimately connected to the identity of people. Other engineering disciplines that are struggling with dilemmas concerning safety, sustainability, or health could therefore also benefit from such critical dialogues about values. Once again, this requires a critical forum that is not afraid to challenge assumptions and a transparent attitude that prefers making the motivation for design choices explicit. Under such conditions, the designers can be held accountable for their design.

Footnotes

1. http://www.keurmerkafrekensystemen.nl/, last accessed 6th of November 2014.

2. A “no sale” is an action on the POS system that has no financial consequences, for example, opening the cash register to change bills into coins.

References

1. Aftergood S (2009) Reducing government secrecy: finding what works. Yale Law Policy Rev 27:399–416
2. Alles M, Brennan G, Kogan A, Vasarhelyi M (2006) Continuous monitoring of business process controls: a pilot implementation of a continuous auditing system at Siemens. Int J Acc Inf Syst 7:137–161
3. American Accounting Association, Committee on Basic Auditing Concepts (1972) Report of the committee on basic auditing concepts. Acc Rev XLVII:14–74
4. Atkinson K, Bench-Capon T (2007) Practical reasoning as presumptive argumentation using action based alternating transition systems. Artif Intell 171:855–874
5. Atkinson K, Bench-Capon T, McBurney P (2006) Computational representation of practical argument. Synthese 152(2):157–206
6. Ayres I, Braithwaite J (1992) Responsive regulation: transcending the deregulation debate. Oxford University Press, New York
7. Bannister F, Connolly R (2011) Trust and transformational government: a proposed framework for research. Gov Inf Q 28(2):137–147. doi:10.1016/j.giq.2010.06.010
8. Bex FJ, Prakken H, Reed C, Walton DN (2003) Towards a formal account of reasoning about evidence: argumentation schemes and generalisations. Artif Intell Law 11:125–165
9. Black J (2002) Regulatory conversations. J Law Soc 29(1):163–196
10. Blokdijk JH, Drieënhuizen F, Wallage PH (1995) Reflections on auditing theory, a contribution from the Netherlands. Limperg Instituut, Amsterdam
11. Bok S (1983) Secrets: on the ethics of concealment and revelation. Pantheon Books, New York
12. Boritz JE (2005) IS practitioners’ views on core concepts of information integrity. Int J Acc Inf Syst 6(4):260–279
13. Bovens M (2007) Analysing and assessing accountability: a conceptual framework. Eur Law J 13(4):447–468
14. Bovens M et al (2005) Public accountability. In: Ferlie E (ed) The Oxford handbook of public management. Oxford University Press, Oxford, UK
15. Breaux T, Anton A (2008) Analyzing regulatory rules for privacy and security requirements. IEEE Trans Softw Eng 34(1):5–20. doi:10.1109/tse.2007.70746
16. Bresciani P, Perini A, Giorgini P, Giunchiglia F, Mylopoulos J (2004) Tropos: an agent-oriented software development methodology. J Auto Agent Multi-Agent Sys 8:203–236
17. Breu R, Hafner M, Innerhofer-Oberperfler F, Wozak F (2008) Model-driven security engineering of service oriented systems. Paper presented at the information systems and e-Business technologies conference (UNISCON’08)
18. Brewka G, Strass H, Ellmauthaler S, Wallner JP, Woltran S (2013) Abstract dialectical frameworks revisited. Paper presented at the 23rd international joint conference on artificial intelligence (IJCAI 2013), Beijing
19. Burgemeestre B, Hulstijn J, Tan Y-H (2011) Value-based argumentation for justifying compliance. Artif Intell Law 19(2–3):149–186
20. Burgemeestre B, Hulstijn J, Tan Y-H (2013) Value-based argumentation for designing and auditing security measures. Ethics Inf Technol 15:153–171
21. Chopra AK, Singh MP (2014) The thing itself speaks: accountability as a foundation for requirements in sociotechnical systems. In: Amyot D, Antón AI, Breaux TD, Massey AK, Siena A (eds) IEEE 7th international workshop on requirements engineering and law (RELAW 2014), Karlskrona, pp 22–22
22. COSO (1992) Internal control – integrated framework. Committee of Sponsoring Organizations of the Treadway Commission, New York
23. COSO (2004) Enterprise risk management – integrated framework. Committee of Sponsoring Organizations of the Treadway Commission, New York
24. Dando N, Swift T (2003) Transparency and assurance: minding the credibility gap. J Bus Ethics 44(2):195–200. doi:10.1023/a:1023351816790
25. Day P, Klein R (1987) Accountabilities: five public services. Tavistock, London
26. Debreceny R, Felden C, Ochocki B, Piechocki M (2009) XBRL for interactive data: engineering the information value chain. Springer, Berlin
27. Deming WE (1986) Out of the crisis. MIT Center for Advanced Engineering Study, Cambridge
28. Dubnick MJ (2003) Accountability and ethics: reconsidering the relationships. Int J Org Theory Behav 6:405–441
29. Dubois E, Mouratidis H (2010) Guest editorial: security requirements engineering: past, present and future. Requir Eng 15:1–5
30. Duff A (2007) Answering for crime: responsibility and liability in the criminal law. Hart Publishing, Oxford
31. Eisenhardt KM (1985) Control: organizational and economic approaches. Manag Sci 31(2):134–149
32. Eisenhardt KM (1989) Agency theory: an assessment and review. Acad Manage Rev 14(1):57–74
33. Elia J (2009) Transparency rights, technology, and trust. Ethics Inf Technol 11(2):145–153. doi:10.1007/s10676-009-9192-z
34. Eriksén S (2002) Designing for accountability. Paper presented at NordiCHI 2002, the second Nordic conference on human-computer interaction, tradition and transcendence, Århus
35. Fabian B, Gürses S, Heisel M, Santen T, Schmidt H (2010) A comparison of security requirements engineering methods. Requir Eng 15:7–40
36. Fleischmann KR, Wallace WA (2009) Ensuring transparency in computational modeling. Commun ACM 52(3):131–134. doi:10.1145/1467247.1467278
37. Flint D (1988) Philosophy and principles of auditing: an introduction. Macmillan, London
38. Florini A (2007) Introduction. The battle over transparency. In: Florini A (ed) The right to know. Transparency for an open world. Columbia University Press, New York
39. Friedman B, Kahn PH Jr (1992) Human agency and responsible computing: implications for computer system design. J Syst Softw 17(1):7–14. doi:10.1016/0164-1212(92)90075-u
40. Friedman B, Kahn PH Jr, Borning A (2006) Value sensitive design and information systems. In: Zhang P, Galletta D (eds) Human-computer interaction in management information systems: applications, vol 6. M.E. Sharpe, New York, pp 348–372
41. Grice HP (1975) Logic and conversation. Syntax Semant 3
42. Hart HLA (1968) Punishment and responsibility: essays in the philosophy of law. Clarendon, Oxford
43. Heckman C, Roetter A (1999) Designing government agents for constitutional compliance. Paper presented at the proceedings of the third annual conference on autonomous agents, Seattle
44. Hess D (2007) Social reporting and new governance regulation: the prospects of achieving corporate accountability through transparency. Bus Ethics Q 17(3):453–476
45. Hofstede G, Neuijen B, Ohayv DD, Sanders G (1990) Measuring organizational cultures: a qualitative and quantitative study. Adm Sci Q 35(2):286–316
46. IIA (2013) The three lines of defense in effective risk management and control. IIA position papers, The Institute of Internal Auditors (IIA)
47. Johnson DG, Mulvey JM (1995) Accountability and computer decision systems. Commun ACM 38(12):58–64. doi:10.1145/219663.219682
48. Knechel W, Salterio S, Ballou B (2007) Auditing: assurance and risk, 3rd edn. Thomson Learning, Cincinnati
49. Korobkin RB (2000) Behavioral analysis and legal form: rules vs. principles revisited. Oregon Law Rev 79(1):23–60
50. Kuhn JR, Sutton SG (2010) Continuous auditing in ERP system environments: the current state and future directions. J Inf Syst 24(1):91–112
51. Leite JC, Cappelli C (2010) Software transparency. Bus Inf Syst Eng 2(3):127–139. doi:10.1007/s12599-010-0102-z
52. Menéndez-Viso A (2009) Black and white transparency: contradictions of a moral metaphor. Ethics Inf Technol 11(2):155–162. doi:10.1007/s10676-009-9194-x
53. Merchant KA (1998) Modern management control systems, text & cases. Prentice Hall, Upper Saddle River
54. Nissenbaum H (1994) Computing and accountability. Commun ACM 37(1):72–80. doi:10.1145/175222.175228
55. Norman DA (1998) The invisible computer. MIT Press, Cambridge
56. Perelman C (1980) Justice, law and argument. D. Reidel Publishing, Dordrecht
57. Power M (1997) The audit society: rituals of verification. Oxford University Press, Oxford
58. Power M (2007) Organized uncertainty: designing a world of risk management. Oxford University Press, Oxford
59. Power M (2009) The risk management of nothing. Acc Organ Soc 34:849–855
60. Power M, Ashby S, Palermo T (2013) Risk culture in financial organisations. London School of Economics, London
61. Prakken H, Ionita D, Wieringa R (2013) Risk assessment as an argumentation game. Paper presented at the 14th international workshop on computational logic in multi-agent systems (CLIMA XIV)
62. Rees J (1988) Self regulation: an effective alternative to direct regulation by OSHA? Pol Stud J 16(3):602–614
63. Romney MB, Steinbart PJ (2006) Accounting information systems, 10th edn. Prentice Hall, Upper Saddle River
64. Santana A, Wood D (2009) Transparency and social responsibility issues for Wikipedia. Ethics Inf Technol 11(2):133–144. doi:10.1007/s10676-009-9193-y
65. Satava D, Caldwell C, Richards L (2006) Ethics and the auditing culture: rethinking the foundation of accounting and auditing. J Bus Ethics 64:271–284
66. Schneier B (2000) Secrets and lies: digital security in a networked world. Wiley, New York
67. Searle JR (1995) The construction of social reality. The Free Press, New York
68. Simon HA (1996) The sciences of the artificial, 3rd edn. MIT Press, Cambridge, MA
69. Starreveld RW, de Mare B, Joels E (1994) Bestuurlijke Informatieverzorging (in Dutch), vol 1. Samsom, Alphen aan den Rijn
70. Suh B, Chi EH, Kittur A, Pendleton BA (2008) Lifting the veil: improving accountability and social transparency in Wikipedia with WikiDashboard. Paper presented at the proceedings of the twenty-sixth annual SIGCHI conference on human factors in computing systems, Florence
71. Turilli M, Floridi L (2009) The ethics of information transparency. Ethics Inf Technol 11(2):105–112. doi:10.1007/s10676-009-9187-9
72. Vaccaro A, Madsen P (2009) Corporate dynamic transparency: the new ICT-driven ethics? Ethics Inf Technol 11(2):113–122. doi:10.1007/s10676-009-9190-1
73. Van de Poel I (2011) The relation between forward-looking and backward-looking responsibility. In: Vincent N, Van de Poel I, Van den Hoven J (eds) Moral responsibility. Beyond free will and determinism. Springer, Berlin, pp 37–52
74. Vishwanath T, Kaufmann D (2001) Toward transparency: new approaches and their application to financial markets. World Bank Res Obs 16(1):41–57
75. Walton D (1996) Argument schemes for presumptive reasoning. Lawrence Erlbaum, Mahwah
76. Weber RH (2008) Transparency and the governance of the internet. Comp Law Secur Rev 24(4):342–348. doi:10.1016/j.clsr.2008.05.003
77. Westerman P (2009) Legal or non-legal reasoning: the problems of arguing about goals. Argumentation 24:211–226
78. Yu E (1997) Towards modelling and reasoning support for early-phase requirements engineering. In: Proceedings of the 3rd IEEE international symposium on requirements engineering (RE’1997), IEEE CS Press, pp 226–235
79. Zuiderwijk A, Janssen M (2014) Open data policies, their implementation and impact: a comparison framework. Gov Inf Q 31(1):17–29

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

1. Delft University of Technology, Delft, The Netherlands
2. Pandar, Amsterdam, The Netherlands
