Association of Research Libraries
Research Library Issues 290 2017
The evaluation can employ various formats and methodologies: from satisfaction surveys, through measuring the achievement of learning goals (or the perception thereof) at the end of library instruction sessions, to anecdotal evidence, which can span the spectrum from repeat customers to thank-you notes. Very often, these are all conducted or received immediately following an instruction session, which can positively bias the responses.
What happens if overall perceptions of helpfulness and value from
the two most important stakeholders of library instruction—faculty
and students—are collected long after a specific library instruction
session in the broader context of an overall assessment of the
library or the entire academic experience? What can we learn
from such data and how can we use what we learn to improve our
instructional offerings or rethink library instruction altogether?
And how can we reconcile data that seem contradictory?
Below we describe a Cornell University Library project—a case study
of triangulating from various data sources and using findings and
further investigation to create and assess the success of a pilot project
intended to improve the student experience, not just their skills.
Faculty See Student Need and Positive Impact of Library
Instruction
Cornell University Library conducted a locally designed census
survey of all its faculty in 2014 with an overall response rate of 46%
(48% among tenured and tenure-track faculty).6 The survey subjects
answered questions about a wide range of topics including their
perception of the information literacy skills of their students, their
use and the perceived impact of library instruction, and, for those who
don’t use library instruction, the reasons for forgoing this service.
Faculty are less than satisfied with the information skills of their
students. University-wide, 33–39% of faculty said that fewer than