Survey Results: Executive Summary
areas, though 13 others (24%) are planning to conduct
studies. The remaining 23 respondents (42%) report that their library has not studied and has no plans to study impact measures.
Relatively speaking, library instruction is the area
that has seen the most impact assessment activities,
probably due to the increased emphasis on assessing
learning outcomes in higher education, as well as
well-established course-evaluation practices at uni-
versities. Still, only 15 respondents (27%) have stud-
ied this area and 12 others (22%) have plans to. That
means half of the responding institutions have not
measured and have no plans to measure whether
participation in library instruction, one of the flagship
services of academic libraries, increases the attendees’
information literacy skills and success in their work
or career. Among the assessment activities that are
occurring in the instruction area, most focus on im-
mediate results of instruction, such as feedback on
instruction and quality of bibliographies in attendees’
assignments, with overall GPA hardly ever used for
correlation studies and post-graduation impact not
investigated at all.
Only a handful of respondents reported any im-
pact measurement activities in the other areas covered
by the survey. Each of the other areas has been studied by between one and five libraries, and between three and nine other libraries plan to conduct studies in the next 12 months. The vast majority of survey respondents have not measured and have no plans to measure
possible correlations between library use and student
success, library use and research output, the library’s
financial value, or any other measures. The number of
studies in each category could be even lower because
it appears that some of the studies might not legiti-
mately belong in the impact categories under which
they were reported. (The responses were too brief for
the authors to better categorize them with confidence.)
To gauge whether the impact assessment activity
is a project or program, the survey asked if the study
was one-time or ongoing. Only half of the responses
across all five study categories indicate that the im-
pact investigations discussed are ongoing. A full 13%
of the activities were clearly reported as one-time
projects. Instruction again appears to be the most
established area: two-thirds of the reported impact
studies were identified as ongoing. In contrast, studies
of research output have the highest reported percent-
age of being one-time projects (50%). It is worth noting
that more than a third of the respondents are unsure
about whether their libraries’ assessment activity is
intended to reoccur or not, of which financial value
calculations ranked the top: eight out of fourteen, or
57%, indicated that they do not know whether that
investigation will be ongoing or one-time. It is hard
to judge whether this is indicative of the uncertainty
about the value or perceived value of such studies, or
it is due to the difficulty of obtaining such findings.
Similar to the findings of SPEC Kit 303, this study
also revealed that libraries tend to initiate impact as-
sessment activities. Library administration is by far
the most-often cited instigator of impact studies. It
is unclear how much of this is in response to exter-
nal pressures. It is interesting to observe that in the
library instruction category, “other entity,” which
includes librarians, faculty, and library or campus
departments, is a very close second as instigator of the reported investigations.
An examination of the methods libraries reported
using to collect data reveals that online and paper sur-
veys rule the landscape, and are the most often used
assessment methods for instruction and research out-
put. The majority of the surveys are designed by the
library itself.
Instruction assessment studies most often collect
data through direct methods, such as evaluation of student assignments and observation of student behavior, and through indirect methods, such as collecting student and
faculty feedback. A handful of respondents men-
tioned that they use standardized tests such as SAILS
(Standardized Assessment of Information Literacy
Skills) and CLA (Collegiate Learning Assessment) for
measuring information literacy skills. When measur-
ing student success, respondents most often reported
analyzing institutionally collected data (5 of 11, or
45%). Only three correlation studies on research out-
put were reported. They used a mixture of qualitative
and quantitative methods to collect data.
The survey asked the libraries that collected data
whether they had also analyzed the data. According
to responses about 34 impact studies, a significant
percentage of the collected data either has not been