Ethics-based auditing of AI is not a panacea. In fact, as a governance mechanism, it is subject to a range of conceptual, technical, economic, social, organisational, and institutional constraints. These are listed in Table 1.
Conceptual constraints are logical limitations that cannot be easily resolved but need to be continuously managed. How to prioritise amongst incompatible definitions of concepts like fairness and justice, for example, remains a fundamentally political question. Hence, one function of ethics-based auditing would be to arrive at resolutions that, even when imperfect, are at least publicly defensible.
Technical constraints are tied to the autonomous, complex, and scalable nature of AI. For example, meaningful quality assurance of AI-based systems is not always possible within test environments, because such systems can update their internal decision-making logic over time. However, since these constraints are context-dependent, they are likely to be relaxed or transformed by future research.
Economic and social constraints depend on the incentives of different actors. Because ethics-based auditing imposes costs, financial and otherwise, care must be taken not to burden particular sectors or groups in society unduly. At the same time, effective governance cannot afford to be naïve. Even when audits reveal flaws in AI-based systems, asymmetries of power may prevent corrective steps from being taken.
Organisational constraints concern the design of operational auditing frameworks. Ethics-based auditing is only as good as the institutions backing it. Currently, a clear institutional structure is lacking. Moreover, the effectiveness of ethics-based auditing remains constrained by a tension between national jurisdictions, on the one hand, and the global nature of technology, on the other.