analyzed (12 studies or 35%) or the analysis is in progress (6 studies or 18%). Data has not yet been analyzed in half of the eight student success correlation studies and in three of the four research output assessment activities. Instruction showed a better result: all but five of the twenty-two respondents (77%) have analyzed collected data. Of the 16 impact assessment studies that have analyzed data, none reported having found a negative correlation, 13 cited a positive correlation, and three reported that the correlation was mixed or inconclusive.
When asked whether the impact assessment results have influenced the library’s or parent institution’s decisions, the respondents reported a larger effect on the library than on the parent institution. Sixteen of the 23 responding institutions (70%) reported that their results have influenced the library’s decisions, ranging from library strategic planning to space decisions. Four reported that such influence reached their parent institution (17%), affecting budget allocations, staffing decisions, and instruction or curriculum change. (It is worth pointing out that all four reported this influence in the instruction category.) Two responded that the results had no impact on the library’s decision making. Ten respondents, however, reported that the influence of the study results on either the library’s or the parent institution’s decisions is “not yet decided.”
In response to the question of whether the results were made available beyond the library, respondents described 33 studies. In eight of the twenty library instruction studies (40%), results were shared beyond the library; in another eight they were not. This sharing may help explain the fact, noted above, that instruction results have influenced more decisions at the parent institution level than results from any other surveyed area. In four of the five financial value studies (80%), results were shared, making this the impact area with the highest percentage of shared results. This can probably be explained, at least in part, by the fact that data about the value of ownership is usually requested by the risk management offices of parent institutions for insurance purposes. Such figures are usually produced by multiplying the number of volumes held by a standard per-volume cost. As such, this kind of data probably does not qualify as a true impact measure, since it captures replacement cost rather than the impact of the content on users’ lives.
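To make the distinction concrete, here is a minimal sketch of that replacement-cost calculation; the volume count and per-volume figure below are hypothetical illustrations, not survey data:

\[
\text{replacement value} = N_{\text{volumes}} \times c_{\text{per volume}}, \qquad \text{e.g., } 2{,}000{,}000 \times \$100 = \$200{,}000{,}000
\]

A figure like this estimates what it would cost to replace the collection; it says nothing about whether the collection changed anyone’s research, learning, or life.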
Although the survey included no questions specifically about obstacles to impact assessment, in their comments respondents identified concerns about patron privacy and the difficulty of establishing meaningful impact measures as major challenges.
Conclusions
The first goal of this SPEC survey was to investigate how far ARL libraries have ventured into assessing their impact on users. Although we hoped to see it half full, we cannot help but admit that the glass of library impact investigations is almost empty. It is encouraging to learn that the activities that did take place were initiated by the libraries themselves, that, among the surveyed areas, correlating instruction with measures of student success is becoming more established, and that some of the assessment results have influenced decision making at the library or parent institution level.
Yet impact assessment remains a field in its infancy for research libraries. Absent institutional or regulatory mandates, impact assessment activities might remain at this level unless compelling success stories provide enough incentive for more libraries to venture into the field.
Our second goal was to help spread best practices. Unfortunately, the number of libraries that have conducted impact assessment is so small that we are uncomfortable holding up any examples as best practices. Instead, we’d like to focus on the major issues we see impeding the development of the field and offer some suggestions that emerge from the respondents’ comments.
Paradoxically, the current hunger for demonstrating library impact might be slowing our libraries’ progress by creating too much pressure to produce results that compellingly support our case. Research libraries should consider and debate such questions as: Is it a necessity for us to assess impact? How can we freely investigate and experiment when, to a large extent, libraries depend on results that look good? What happens if investigations do not demonstrate positive correlations? Do we share the results, and with whom?