SPEC Kit 352: Collection Assessment
Collection Assessment Process
All but two of the responding institutions indicated that they gather collections-related data above and
beyond what is required by the annual ARL, ACRL, and IPEDS statistics surveys, with over half doing
so on a regular basis, and nearly 40% on a project basis. The most common types of other data gathered
include usage and cost data for evaluating resources, and holdings, usage, and expenditures related to
subject-based collections. Other categories of data reported include Web analytics (e.g., Google Analytics,
EZproxy logs, search logs), analysis of interlibrary loan (ILL) requests to assess needs or gaps, usage
patterns (e-resources, circulation, digital repository usage), and citation analyses.
Assessment of library collections (versus the gathering and reporting of data) is also an integral
aspect of collection management. Nearly two-thirds of the respondents currently have a process (formal
or informal) for regularly assessing library collections, and another 30% are in the process of developing
one. Only three institutions reported neither a process for regular assessment nor any intent to
implement one, citing insufficient staff, time, technical infrastructure, or perceived value.
Collection assessments and evaluations are conducted by most respondents annually and/or as needed.
A few respondents evaluate “continuously,” semi-annually, quarterly, or monthly. About half of the
respondents indicated no limit to the scope of their evaluations, another 20% limit the scope by format
and subject, and 13% limit their scope to selected formats. The respondents’ comments on the
scope of collection evaluations indicate that evaluations are conducted at varying levels and, in some
cases, only for subscribed resources.
Responses concerning the formats and collections included in evaluations indicated that this
question was not clearly stated. The comments make clear that the terms “format” and “collection”
were neither well defined nor differentiated. For the purposes of this analysis, therefore, the responses
to the two questions were merged. It was evident that all
respondents have included online resources and all but two have included print materials in their
evaluations. Additionally, close to two-thirds have included audiovisual materials or resources, and about
half included microform and other physical materials such as government documents, music scores, and
open access resources.
Serials and/or monographs—regardless of format—were evaluated by nearly all respondents,
followed by demand-driven acquisitions (DDA). Nearly half the respondents have evaluated their
government documents collections, while a third have evaluated their open access resources or their
archives. Interestingly, eleven respondents (16%) selected all of the options, indicating comprehensive
assessment. Conversely, six respondents had only evaluated journals and monographs, and four
respondents selected only one collection.
Locus of Data Collection and Analysis
At what levels do libraries collect and analyze the data?
An important goal in conducting this survey was to understand the extent of human resources devoted to
collection assessment. Of the 67 respondents who answered the admittedly complex series of questions
regarding locus of data collection and analysis responsibilities, most indicated that both data collection
and analysis occur at every level: local, system, consortial, and shared collections. However, as the
levels broadened, the gap widened between the number of respondents who collected data at a given
level and the number who analyzed it there. While most of those who analyzed data at the local library
level also collected that data, fewer collected the data that was analyzed at the more expansive levels.