8 Survey Results: Executive Summary
Only fourteen respondents reported other methods, many of which were variations on those
listed in the survey questions. For example, respondents rank journals by usage, by faculty perception,
and by global citation analysis. Almost all of the projects described include some aspect of usage, many
focusing on a particular format such as e-books, journals, or print monographs. It is clear that usage has become the
most prominent, if not the most important, measure for collection assessment.
The commercial collection analysis tool used by the most respondents (currently, previously, or
interested in using) is the YBP Gobi Peer Groups, followed closely by the OCLC Collection Evaluation/
Analysis System and the ProQuest Intota Assessment. The Bowker Book Analysis had the most “never
used” responses. GreenGlass (from OCLC Sustainable Collection Services) was the most commonly
mentioned other tool that respondents are currently using. Other systems include data management and
visualization tools (e.g., Cognos and Tableau), usage data management, overlap analysis tools, Ulrich’s
Serials Analysis, and UStat.
Interestingly, only eight respondents (13%) have used freely available data, most notably ARL and
IPEDS statistics. Other data sets mentioned include CUFTS (for database overlap), the Scopus Journal
Metrics (Source Normalized Impact per Paper (SNIP), Impact per Publication (IPP), and SCImago Journal
Rank (SJR)), and the WorldCat Expert Search feature to compare holdings.
Collection Assessment Results Dissemination
Audience for and Format of Reports
We were interested in learning how libraries disseminate their collection assessment results, both the
formats and the audiences: essentially, “who gets what.” Not surprisingly, those internal to the library are
the most common recipients of information (over 90%), with library administration, collection managers,
and subject specialists receiving slightly more responses than other library staff. There is a notable
drop in the number of responses for the next cluster, institutional administration or oversight (roughly
70–80%), while about half make their information available to the general public. Only a few respondents
reported other audiences, and these tended to be funders and alumni.
Print or PDF reports and in-person presentations are the most commonly used formats for
sharing data (60 respondents each, or 92%) across all constituent categories. Many respondents (51 or
79%) disseminate these files through the library intranet (primarily to library staff) and 32 (49%) use
the public website (for a broader audience). By far, the institutional repository is the least used mode for
disseminating collection assessment results; only five respondents selected this option. While almost all
of the respondents share assessment data through written reports and presentations/slide-shows that
include charts and graphs, only 29 respondents reported using interactive visualizations/dashboards to
represent their findings.
Another purpose of this survey was to determine the accessibility of the summary or raw data
gathered for collection assessment purposes. The goal was to determine the data sharing environment
of the ARL respondents: 63 responded to questions pertaining to summary data and 58 responded
to questions pertaining to unprepared/raw data. Most of the respondents (41 or 65%) indicated that
stakeholders have either direct access or access upon request to summary collections evaluation and
assessment data. Another 18 (29%) provide more limited access to the summary data, and only three
indicated that most summary data is not accessible at all. Twenty-two respondents (38%) reported that
most raw data is accessible upon request and an equal number reported that some data is accessible.
Eleven (19%) indicated that raw data is not accessible at all.
Collection Assessment Outcomes
We were very interested in learning the outcomes of collection assessments, as well as what collection
assessment challenges libraries face. The top two results of collection assessment have been an