Schoenenberger, director of product data and metadata management at Springer Nature, is clear that the intent of the project is “to initiate a public debate on the opportunities, implications and potential risks of machine-generated content in scholarly publishing.”32 Springer has gone to great lengths to document its process, discuss alternative strategies, identify weaknesses and outright failures, and encourage critical commentary. In many ways it has provided an “explainability sandbox” for scholarly publishing. The value of this and similar books will be determined in part by interrogating the methods and processes by which they are constructed. In other words, the emerging AI books will need the capacity to explain themselves.

Conclusion

In his article about stewardship in the “age of algorithms,” Clifford Lynch argues that algorithmic accountability is “the domain of the regulator or social justice advocate, not the archivist.”33 However, he also notes that “this new world is strange and inhospitable to most traditional archival practice” and that “our thinking about a good deal of the digital world must shift from artifacts requiring mediation and curation, to experiences.”34 These observations suggest that the role of the archivist (and of research libraries more generally) should indeed include algorithmic accountability because of its centrality to emerging practices. The complexity and opacity of algorithmic decision-making, replete with limitations, outright failures, and dramatic advances, is challenging and changing our notions of information systems and their use. The field of explainable AI has emerged as a set of strategies, techniques, and processes used in a variety of contexts. Research libraries have a unique and important opportunity to shape the development, deployment, and use of intelligent systems in a manner consistent with the values of scholarship and librarianship.