56  ·  Survey Results:  Survey Questions and Responses
Altmetrics for sure. But as there are more players (the only citation database used to be Web of Science), it gets harder and harder to choose the source data, let alone the metrics used. The biggest problem yet to be solved is combining results from different citation databases: one must deduplicate not only the cited references (the faculty member’s papers) but also the citing references, and there is still no good way to do the second part. Scholarly output assessment is here to stay; it is a natural area for librarians, since most of the assessment is based on citations, mentions, and downloads of published material, whether formal or informal. We know scholarly publishing.
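The cross-database deduplication problem this respondent describes can be sketched with a simple normalized-key merge. This is a minimal illustration only: the field names and normalization rules are assumptions for the example, not the schema of any actual citation database, and real deduplication needs fuzzier matching (DOI lookup, edit distance, venue comparison).

```python
import re

def normalize_key(record):
    """Build a crude match key from title, year, and first-author surname.

    Hypothetical record fields ("title", "year", "first_author") are
    assumed for illustration; real database exports differ.
    """
    title = re.sub(r"[^a-z0-9 ]", "", record["title"].lower())
    title = " ".join(title.split())  # collapse whitespace
    surname = record["first_author"].split(",")[0].strip().lower()
    return (title, record["year"], surname)

def merge_citing_references(*databases):
    """Union citing-reference lists from several citation databases,
    keeping one record per normalized key."""
    seen = {}
    for db in databases:
        for rec in db:
            seen.setdefault(normalize_key(rec), rec)
    return list(seen.values())

# Two databases report the same citing paper with cosmetic differences;
# the merge keeps a single record.
wos = [{"title": "A Study of X.", "year": 2020, "first_author": "Smith, J."}]
scopus = [{"title": "A study of X", "year": 2020, "first_author": "Smith, Jane"}]
merged = merge_citing_references(wos, scopus)
```

The hard part in practice is that a key this crude both over-merges (different papers with similar short titles) and under-merges (retitled preprints), which is why the respondent calls the citing-reference side unsolved.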
Altmetrics that focus on non-scholarly attention to scholarly output will require libraries to turn their attention to things
like traditional and social media. Non-traditional scholarly output, such as data sets and code, will require new tools to
track citations and impact. Librarians will need to better understand the research process in order to help researchers
measure the impact of these outputs.
Arts & Humanities: Even though we think they would benefit from altmetrics, they want to use conventional metrics for assessment (e.g., the h-index) because that is the only way they can stand on a level playing field with scientists. The h-index must be used across faculty disciplines even though some disciplines may see problems with it. Librarians focus on the problems of traditional metrics like the h-index and the JCR, but this does not help administrators use metrics better; it only makes them annoyed (at us).
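For reference, the h-index discussed above is the largest h such that h of a researcher’s papers each have at least h citations, which is part of why it plays out differently in disciplines with different citation volumes. A minimal computation:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # Walk papers from most to least cited; h grows while the i-th
    # ranked paper still has at least i citations.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers have at least four citations each.
```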
As North American universities adopt research information management and research assessment software, libraries
will be more involved in explaining what it means to faculty, and will be positioned to help faculty present their scholarly
outputs in the best light.
As scholarly output increasingly moves toward non-traditional platforms (e.g., blogs, social media), what are the
implications for collecting and preserving the scholarly record? What types of scholarly output will be prioritized among
research libraries? How might current methods and tools for assessing scholarly output reshape the scholarly record that
will be available through research libraries in the future?
Author disambiguation identifiers (ORCID, ResearcherID, etc.) and their related metadata are only as useful as the data sources being harvested are accurate, detailed, and accessible. Financial limitations and inaccurate data will continue to impede progress in this area unless libraries and publishers work together to improve the situation.
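Part of what makes identifiers like ORCID useful for disambiguation is that they are structured and self-checking: the final character of an ORCID iD is a check digit, so malformed or mistyped iDs can be caught before harvesting. A minimal validator, sketched from ORCID’s published ISO 7064 MOD 11-2 checksum algorithm:

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit of an ORCID iD,
    e.g. '0000-0002-1825-0097' (the example iD in ORCID's own docs)."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    # A remainder of 10 is written as the letter X in the final position.
    expected = "X" if result == 10 else str(result)
    return digits[-1].upper() == expected
```

A checksum only proves the string is well-formed, not that the iD belongs to the right person; the accuracy and accessibility concerns in the response above still apply.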
Cost of the tools, difficulty aggregating the data
Currently a popular service in the sciences, but it will become increasingly important in the humanities. Campus administration’s increased interest in scholarly output assessment is something libraries need to be aware of and respond to.
Data (and other digital scholarship “objects”) are a big issue: not only the preservation of data but finding ways to assess usage beyond citation metrics. There are groups examining this. Data citation is one method, but it has yet to become standard practice. This is likely to be messy for a while yet.
In the last few years, we have suddenly started seeing problems with researchers not understanding the difference between a “journal” and a series of publications posted on a website. Electronic journals have caused confusion about what volume and issue numbers are and why they are needed, along with how to determine the “reputation” of a journal before submitting articles for publication. There is a need to spend time educating researchers about predatory publishing and vanity presses. One of our librarians reached out to a society publisher whose name was being “reused” by a predatory venue, and it led to the publisher producing a three-part mini-series on the topic in their society newsletter.
Data sharing and digital scholarship/humanities result in scholarly output other than journal articles. Datasets are
published through repositories with digital object identifiers (DOIs) for ease of citation. Data citations should be counted
in scholarly output assessment, and new types of research output from digital scholarship/humanities projects should
be considered in addition to other forms of scholarly output.
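Counting the data citations this respondent mentions starts with recognizing DOI strings in reference lists. A rough sketch, using a pattern adapted from the regular expression Crossref recommends for matching modern DOIs (an approximation; some older DOIs deviate from this shape, and the example DOI below is hypothetical):

```python
import re

# Adapted from Crossref's recommended pattern for modern DOIs:
# a "10." prefix, a 4-9 digit registrant code, then a suffix.
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(text: str) -> list[str]:
    """Return DOI-like strings found in free text, e.g. a reference list."""
    return DOI_RE.findall(text)

# Hypothetical data citation with a made-up repository DOI.
refs = "Dataset: https://doi.org/10.5061/dryad.example123 (2021)."
```

Matching the string is only the first step; crediting the dataset’s creators still requires resolving the DOI to its metadata record.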