Opacity and Trust

Why do we need an explanation for how AI works? Geoffrey Hinton, often referred to as the godfather of deep learning and neural networks, observes, “A deep-learning system doesn’t have any explanatory power…the more powerful the deep-learning system becomes, the more opaque it can become.”8 Despite this, Hinton has been critical of requirements that AI should explain itself and insists that performance should be the key measure of trust. After all, humans cannot explain many of their own actions or decisions, so why expect AI to do otherwise?

While Hinton may discount the importance of, or even the need for, an explanation, psychologists and cognitive scientists do not. Explanations are “more than a human preoccupation—they are central to our sense of understanding, and the currency in which we exchange beliefs.”9

There is an extensive literature on both the power and the failings of AI: examples of discrimination and unfairness are matched by extraordinary advances and successes. It is exactly for these reasons that the opacity, complexity, and consequential nature of AI drive the need for trust and elevate explanation as a key antidote.

What is a good or satisfactory explanation? For whom is the explanation provided, in what context, with what evidence (if any), and presented in what manner? An explanation should be able to address the “how” (inputs, outputs, process), the “why” (justification, motivation), the “what” (awareness that an algorithmic decision-making system exists), and the “objective” (design, maintenance).10

In the context of opaque systems, an explanation should be:

As academic libraries increasingly acquire and develop algorithmic decision-making systems and services in support of scholarly communications and the operation of the library, they must do so in a manner that insists on interpretability and explanation.