SPEC Kit 352: Collection Assessment
Global citations: Academic Analytics and Web of Science (when needed). Peer comparisons: every 3–6 years using OCLC data. Accreditation: only when needed. Direct evaluation: project specific and when needed.
In all cases this varies from annually to as requested, often in connection with collection management projects.
In most cases, another interval means upon request/ad hoc. Accreditation is done according to accreditation cycles.
Individual collection managers conduct these evaluations on an as-needed basis. The Libraries does not currently conduct these evaluations at an institution-wide level.
Irregularly, as needed
Many of these processes occur on an ad hoc/as-needed basis.
Mostly as needed. Some of the above, such as accreditation guidelines, are conducted at regular intervals, but others are conducted on an ad hoc basis.
On occasion
Ongoing, as needed
Other intervals refer to portions of the collection, on a project basis.
Peer library: as needed. Direct evaluation: multiple times annually
The RLG Conspectus was completed several times, years ago. Accreditation guidelines have been used for individual school accreditations.
Used on an ad hoc basis for specific projects.
Varies, as needed
Various selectors have used global citation analysis, list checking, and visual evaluation in projects as the need arises. We do accreditation evaluations as the colleges/units need them.
We use impact factor to make selection and renewal decisions, but have not done a global review based on impact factor. We've done small-scale analysis with peer comparison and department-level accreditation reviews.
We were an active participant in the RLG Conspectus in the 1980s and 1990s. We have used LibQUAL+ surveys in the past, which include comparisons to peer libraries. Collection analysis in connection with our 2CUL partnership with Columbia and other collaborative collection development initiatives has included comparisons of holdings, but these have been undertaken on a project basis. Visual evaluation has played a role in decision-making around remote storage; this has also been mainly project-based. I am not aware of systematic recent use of list checking or brief tests, although individual selectors may use variations on these methods from time to time. Impact factor and similar measures are used, in some cases, in cancellation decisions, but we do not systematically track citations for collection development purposes.
When relevant
Where selected, “another interval” should be read as “as and when required.”