Authorizations

Authorizations as explanations are processes, typically involving third parties, that provide an assessment or ratification of an AI system. Authorizations might pertain to the AI model, its operation in specific instances, or even the process by which the AI system was created. Authorization raises issues of transparency, expertise, due process, litigation, and liability. This section looks at voluntary codes, auditing, legislation, and regulation.

Voluntary codes or standards that encourage explanatory capabilities are approaches to explanation supported by the AI industry and professional organizations (for example, the Association for Computing Machinery and the IEEE). Self-regulation through non-binding codes or standards is a type of governance that some argue is the most effective for rapidly changing technologies: the inflexibility of legislation and regulation might either unnecessarily constrain AI or prove ineffective in managing new developments. The “privacy by design” initiative might be a model for something like “explanation by design,” whereby prior impact assessment reports, certification requirements, and codes of conduct would provide incentives for more “scrutable algorithms.” Unfortunately, this strategy is undercut by the poor track record of voluntary mechanisms in privacy protection.

A commonly recommended approach to AI explanation is third-party auditing. The use of audits or audit principles is widely accepted in a variety of fields. While auditing is typically ex post, it can be conducted at any stage: audits might examine design specifications, completed code, or operating models, or periodically review specific decisions. Auditing for XAI would require trusted auditors, an accepted set of standards to measure against, and the “auditability” of the algorithms or systems; the sketch below illustrates what such an audit might look like in practice. Critics of the audit approach have focused on the lack of auditor expertise, the complexity of algorithms, and the need for ex ante approaches.
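To make “auditability” concrete, the following is a minimal sketch of an ex post audit of specific decisions: a third party replays a system’s decision log and checks it against an agreed-upon standard. The log format, the demographic-parity metric, and the 0.05 threshold are illustrative assumptions, not anything prescribed by this article or by an actual audit standard.

```python
# A minimal sketch of an ex post decision audit. The Decision record,
# the parity metric, and the 0.05 threshold are all hypothetical stand-ins
# for whatever log format and accepted standard real auditors would use.

from dataclasses import dataclass

@dataclass
class Decision:
    group: str        # protected attribute recorded at decision time
    approved: bool    # the AI system's recorded outcome

def approval_rate(log: list[Decision], group: str) -> float:
    """Share of logged decisions for one group that were approvals."""
    members = [d for d in log if d.group == group]
    return sum(d.approved for d in members) / len(members) if members else 0.0

def audit_parity(log: list[Decision], group_a: str, group_b: str,
                 max_gap: float = 0.05) -> bool:
    """Pass the audit if approval rates for the two groups differ
    by at most max_gap (the assumed accepted standard)."""
    gap = abs(approval_rate(log, group_a) - approval_rate(log, group_b))
    return gap <= max_gap

# Example: an auditor replays the log and checks it against the standard.
log = [Decision("A", True), Decision("A", True),
       Decision("B", True), Decision("B", False)]
print(audit_parity(log, "A", "B"))  # False: a gap of 0.5 exceeds 0.05
```

Even this toy check presupposes exactly the requirements noted above: a trusted auditor to run it, an accepted standard to measure against, and a system designed to record its decisions in an auditable form.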