SPEC Kit 346: Scholarly Output Assessment Activities · 59
Stronger relationship between output assessment and the funding, tenure, and promotion of faculty. The integrity
of data will come into question, especially when it comes to usage data (e.g., distinguishing “real” from robot web
visits). Do the metrics actually measure what we hope they do?
The area of altmetrics poses new challenges for research output evaluation, as there is still little research into the
meaning of these metrics. It also provides exciting opportunities to capture the impact of new forms of scholarly
communication. Libraries should keep a keen eye on developments in this area.
The big publishing conglomerates are all trying to corner the market in this space. Libraries will need to be careful not to
get stuck in unhealthy relationships again, with closed standards, closed systems, and proprietary software and data. It
will be important to promote openness and competition, and for universities to have control over their own data.
The development of altmetrics is something to watch; it will likely become more important and relevant in the next
five years.
The incomplete, but very interesting and easily obtained, results provided by services like ResearchGate and Google
Scholar Profiles are already leading people to accept quick, free, and incomplete data over data from commercial
products such as SciVal, InCites, etc.
The integration of more traditional scholarly output assessments (citation impact factor, h-index) with new methods of
assessment and with new partners on campus (institutional research, office of research).
The limitations of the h-index in the shifting scholarly communications landscape will most likely demand new skills and
training for library professionals to implement assessment for emerging forms of scholarship and impact.
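As context for the limitations discussed above, the h-index is defined as the largest h such that an author has h publications with at least h citations each. A minimal sketch of the computation (illustrative only; the function name and sample citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # this paper still supports an h of `rank`
        else:
            break  # remaining papers have fewer citations than their rank
    return h

# Hypothetical citation counts for six papers
print(h_index([10, 8, 5, 4, 3, 0]))  # → 4
```

The definition makes the critique concrete: the metric counts only citations to traditional publications, so newer forms of scholarship (datasets, software, posters) contribute nothing to it.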
The tracking of altmetrics will become much more prevalent.
There are so many new avenues of scholarly assessment that appear almost daily. At this point I think that it is too early
to understand the value of many of them.
There is a high cost to scholarly output assessment products such as ImpactStory, Plum Analytics, etc. Many universities
have Web of Science or Scopus, but most campuses can’t afford both. At the campus level, which unit will be expected
to pay for products such as Plum Analytics, Digital Measures, InCites, etc.? Offices on campus often point to the library
to pay, but library budgets generally can’t absorb these costs. Scholarly output assessment measures are poised to
shift, and additional measures will be added to assessment, but adoption and integration per discipline or department will
not occur all at once. Campus and discipline tenure and promotion processes will include new metrics, but some will
be slower to adapt. Also, libraries are being asked to double-check commercial research impact products’ results, which
is impossible since the commercial products use proprietary methodologies. Adoption and widespread use of ORCID
identifiers will help, but this will still take several years to ramp up.
There is no one-size-fits-all solution for scholarly output assessment. There is a need to think beyond the STEM
disciplines to the ways in which other disciplines, particularly in the humanities, can and should evaluate scholarly
output. There is also an increased need to account for alternative methods of scholarly output, such as conference
posters or the development of new technology or methods based on research.
Use of measures beyond citations in promotion and tenure decisions and departmental evaluations, including altmetrics
and institutional repository statistics. Also, defining what those measures mean qualitatively as well as quantitatively.
Vendors will develop tools that we have to evaluate and budget for. Faculty will use a variety of vendors and open
source software, creating a range of demands from different departments and disciplines. It will take time to develop
consensus on the most effective tools. Changes in publishing will impact how output is assessed (e.g., data publications
and article-level metrics).