
Reddit’s Ask Me Anything with Stacy Konkiel about research metrics today (May 10, 2016)

You have a chance to ask your most pressing questions about research metrics today, May 10, 2016, by 10 am PDT. Here’s more about the panelist for this discussion, Stacy Konkiel, from her Reddit AMA (Ask Me Anything) introduction,

Hi, I am Stacy Konkiel, Outreach & Engagement Manager at Altmetric, and I’m here to talk about whether the metrics and indicators we like to rely upon in science (impact factor, altmetrics, citation counts, etc.) to understand “broader impact” and “intellectual merit” are actually measuring what we purport they measure.

I’m not sure they do. Instead, I think that right now we’re just using rough proxies to understand influence and attention, and that we’re in danger of abusing the metrics that are supposed to save us all (altmetrics) just like science has done with the journal impact factor.

But altmetrics and other research metrics don’t have to be Taylorist tools of control. I love the promise they hold for scientists who want to understand how their research is truly changing the world.

I especially appreciate the fact that newer metrics allow the “invisible work” being done in science (by the data curators, the software developers, etc.) to be recognized on its standalone merits, rather than as a byproduct of the publication process. That’s been my favorite part of working for Altmetric and, previously, Impactstory: that I can help others to better value the work of grad students, librarians, data scientists, etc.

Today, I want to talk about better measuring research impact, but I’m also open to taking other relevant questions. There will also be some live tweeting from @Altmetric and @digitalsci, and questions can be submitted using the #askstacyaltmetric hashtag.

My favourite question so far is this (it’s a long one; I’ve added a short sketch of how the metrics it mentions are computed after the quote),

gocsick

I might be pessimistic, but I am not sure there will ever be a satisfactory metric or indicator, for the simple reason that it is impossible to normalize across fields, or even across sub-disciplines within a field. Take my department of materials science, for example: we recently hired two assistant professors, one in the area of materials chemistry for batteries and energy storage and the other in welding metallurgy. The top candidates for the energy storage position had multiple Science and Nature publications, with h-indices of ~40. The welding candidates had h-indices of ~6. The interview processes for the two positions were interleaved, and in general we felt that the welding candidates were better in terms of the depth and breadth of their knowledge and their ability to do high-quality science. Conversely, while the energy candidates had great numbers and publications, they seemed to be one-trick ponies, with little ability to contribute outside of their very narrow speciality.

My point is that any metric is going to underestimate the contribution of people working in areas of science outside the current sexy topics. Within a field, we all know which journals are well respected and only publish high-quality science. For example, the Welding Journal and Corrosion are the premier publications in their respective fields, but each has an impact factor of <2. This is a reflection of the number of people and publications in the field rather than of the quality of the science.

I suppose I should end the rant and ask some questions.

1) Is the situation I described really a problem, or am I just being grumpy? After all, the metrics are supposed to measure the impact of the work, and people who work in less popular fields are less likely to make a big “impact”.

2) Is it possible to develop a metric of “quality” rather than impact? If so, what would such a metric look like?

3) Altmetrics will likely have the same failings. There are many ways someone’s work can have broader impact without necessarily being talked about on social media or in popular science blog posts. For example, research leading to $$$ saved in industry by prompting a small change to a manufacturing operation could have a huge broader impact but never once be tweeted (or be cited more than a handful of times, for that matter). Is there any potential to develop metrics to help shed light on these other “blind” activities?

Thanks
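For anyone who hasn’t worked with these numbers directly, here is a minimal sketch in Python of how the two metrics gocsick mentions are computed. The citation counts and journal figures are hypothetical, chosen only to mirror the ~40 versus ~6 h-indices and the sub-2 impact factor from the comment.

def h_index(citations):
    """h-index: the largest h such that at least h of the author's
    papers have been cited at least h times each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_recent_items, citable_items_published):
    """Simplified two-year journal impact factor: citations received
    this year to items published in the previous two years, divided
    by the number of citable items published in those two years."""
    return citations_to_recent_items / citable_items_published

# Hypothetical citation counts for two equally strong researchers:
# 40 energy-storage papers, each cited 42+ times by a large field,
# versus 10 welding papers cited by a much smaller community.
energy_storage = [120, 95, 80, 70, 60, 55, 50, 48, 45, 42] * 4
welding = [10, 9, 8, 7, 6, 6, 5, 3, 2, 1]

print(h_index(energy_storage))  # 40
print(h_index(welding))         # 6

# A hypothetical niche journal: 100 citable items over two years,
# 180 citations to them this year.
print(impact_factor(180, 100))  # 1.8, i.e. "an impact factor of <2"

Nothing in either formula normalizes for the size of the field: both the welding researcher’s h-index and the Welding Journal’s impact factor are capped by the number of people available to do the citing, which is exactly gocsick’s point.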

I hope you get there in time; if not, perhaps someone else has asked the question for you. I look forward to seeing the answers.