The following is a written conversation between Michael Cox and Mike Schoon.

Michael: When I was a kid, I played Dungeons and Dragons (D&D) with my older brothers. D&D is a role-playing game in which you pretend that you are someone else in a very different kind of world. Sometimes what I liked most about the game was looking over my “character sheet”, a piece of paper that shows how strong and wise and smart your wizard or fighter or thief is. I would look at these numbers, each recorded in its own box, and think about how they might increase when my character gained a level. Much of my investment in the game came from this numerical progress and from my strong personal identification with my avatar. As an adult and an academic, I wonder about the extent to which I have the same relationship with, say, my Google Scholar or ResearchGate pages as I once had with that piece of paper. They also display numbers that increase, and they are a representation of my identity to which I have become more attached than I would like.

Mike: I think that many of us can relate to this. Whether we are playing D&D, obsessing over sports statistics, checking the latest university rankings, tracking politicians’ poll numbers, or comparing economic indicators across countries, we often find ourselves drawn to “simple” metrics and measurements of complex phenomena.

For our current conversation, I’m struck by how simple metrics like h-indices and citation rates drive so much of academic behavior. Our collective enterprise has, in many ways, devolved into an academic arms race and the resulting “publication inflation”, with ever-rising expectations about how much one should publish. We see postdocs, scrambling to land a tenure-track position, publishing far more than tenured faculty. We see junior faculty cranking out publications for the same reason. Academics are stuck on a treadmill driven by positive feedback loops (positive in the sense of self-amplifying, not good and definitely not beneficial). It reminds me of a remark by Nobel laureate Simon Kuznets, who created a precursor to GDP (national income accounting) as a measurement of a national economy:

“The welfare of a nation can scarcely be inferred from a measurement of national income.” – Simon Kuznets

Michael: A self-reinforcing arms race sounds right. We have a dominant narrative that science is a public good, which broadly implies that the more of it we make, the better. I have seen a depiction of a public good function with diminishing marginal returns (Ostrom 2007), but I don’t know how commonly we entertain one with negative returns, which is what I think is happening. Specifically, overproduction dilutes the pool of research, increasing the noise that has to be sifted through to find the signal of good work. The real public good here is high-quality, accessible knowledge, and we don’t worry much about it at the individual level; arguably, we decrease it as we advance our own metrics.
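To sketch what I mean (this is just a stylized illustration, not a model drawn from Ostrom 2007 or anywhere else, and the symbols are my own): let $B(n)$ be the collective value of a pool of $n$ publications. The usual public-good picture assumes diminishing marginal returns, while the dilution story says marginal returns eventually turn negative past some threshold $n^{*}$:

$$
B'(n) > 0,\; B''(n) < 0 \quad \text{(diminishing returns)} \qquad \text{vs.} \qquad B'(n) < 0 \;\text{for}\; n > n^{*} \quad \text{(negative returns)}
$$

Past $n^{*}$, each additional paper still raises its author’s metrics, but it lowers the value of the pool, because the noise it adds outweighs the signal.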

And this is a self-reinforcing collective-action problem. It’s as if we were a group of farmers overproducing crops, lowering the price of the commodities we all produce. It’s in the interest of each actor to produce more, but if we all do, the value (of commodities or ideas) declines in the marketplace, and we have to compensate at the individual level by ramping up production even further. In academia there is the added distortion that production volume is read as a straightforward signal of quality.

Put another way: no one has enough time to read each other’s work, because we are all publishing so much; and we are all publishing so much because we need to in order to maintain our reputations via simplified metrics, precisely because others don’t have enough time to read our work, since they too have to publish so much. And so on.

I also think that this process exacerbates the “Matthew effect”, whereby highly cited scientists keep collecting more citations. Differences in prestige are now rendered in conspicuous numerical metrics, allowing prestige bias to run rampant: it is easier than ever to locate individuals with visible markers of prestige and to reward them further by citing them more. This also decreases the diversity of the knowledge pool.
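A toy simulation makes the rich-get-richer dynamic concrete. The sketch below is purely illustrative (the author counts, citation counts, and weighting rule are arbitrary assumptions, not estimates from real citation data): every author starts at zero, yet routing each new citation toward already-cited authors concentrates most of the citations on a small group.

```python
import random

# Toy "Matthew effect" model: each new citation goes to an author with
# probability proportional to the citations they already have (plus 1,
# so that authors with no citations can still be picked).
def simulate(n_authors=100, n_citations=10_000, seed=1):
    random.seed(seed)
    counts = [0] * n_authors
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]  # prestige-weighted choice
        winner = random.choices(range(n_authors), weights=weights)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

counts = simulate()
share = sum(counts[:10]) / sum(counts)
print(f"Top 10% of authors collect {share:.0%} of all citations")
```

Early, essentially random advantages compound: the first few citations an author happens to receive raise their weight in every later draw, which is exactly the feedback loop described above.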

Finally, I’ll add that this situation is a bit ironic for our field, given how strongly we embrace criticisms of myopic approaches to management and governance (Holling and Meffe 1996). In this literature there is a name for the problem: Goodhart’s Law (Muller 2018), which holds that once a measure becomes a target, actors respond to it strategically and it ceases to be a good measure.

Mike: It’s interesting that you bring up the commodity overproduction example, as I saw that the price of oil from American producers (West Texas Intermediate, to be specific) briefly went negative yesterday (April 20). There is no demand due to COVID-19 and no spare storage capacity, so suppliers are paying others to take the oil away!

I mentioned earlier that this affects all of us individually and hits early-career folks especially hard as they move beyond their doctoral programs. At that stage, most do not yet have the feedback loops in place (an established reputation, a network of other scholars, etc.) that boost citations and other reputational signals and thereby increase the uptake of new research.

But these problems are not confined to the individual level; they compound across scales. Schools need to satisfy university administrations by attracting attention to their research (citations), attracting money (grants), and attracting more students (reputation). And that reputation is built, in part, through college rankings, which rely on the same relatively simple metrics (grants, citation rates, etc.). As a result, decisions on career advancement (getting tenure, getting a raise) are conditioned by, and in turn reinforce, the same simple measurements.

Michael: Agreed that this is a systemic problem. As a partial response, I think that folks who have crossed the “reputation threshold” should set norms for behavior by avoiding overpublishing and focusing on the outputs they consider most socially valuable. We need to change our norms so that overpublishing is seen not so much as a virtue but as a vice, or at the very least as a complicated mixture of the two. As a final thought, I would suggest that folks read a great book that reflects on these ideas, The Slow Professor (Berg and Seeber 2016).

Mike: Some of my colleagues have increasingly been turning to alternative outputs: webinars (see the Programme on Ecosystem Change and Society webinar series that I have been hosting), virtual meetings, popular-science pieces, and so on. Some colleagues within the commons community (led by Marco Janssen) have moved strongly in this direction with webinars, virtual conferences, and virtual summer and winter schools. Then there is the Finding Sustainability Podcast that you co-host.

It takes more coordination, but these activities provide opportunities for learning and scientific advancement outside of (and often superior to) traditional publications. As we scale up individual efforts within our departments and schools, we can also begin to dismantle the structural barriers that keep us in this rigidity trap. I recently saw a posting on Michigan State University’s Cultivating Pathways to Intellectual Leadership (CPIL) framework, which attempts “to disrupt an impoverished understanding of scholarship limited to a restrictive range of outputs that reinforce exclusionary structures of power and regressive modes of production” (“Staying with the Trouble: Designing a Values-Enacted Academy” 2020). Here we see an effort to scale individual actions up to departmental or school-level change.

Ultimately, we need to remember that our obligation to scientific advancement is fulfilled not by the quantity of pages written but by the quality of the output: by attempts to improve our understanding of the world and, in turn, to make it a better place. As one of my doctoral advisors would say, “Some academics slice the bologna very thinly.” It’s time to improve our scientific output.

References

Berg, Maggie, and Barbara K. Seeber. 2016. The Slow Professor: Challenging the Culture of Speed in the Academy. University of Toronto Press.

Holling, C. S., and Gary K. Meffe. 1996. “Command and Control and the Pathology of Natural Resource Management.” Conservation Biology 10 (2): 328–37.

Muller, Jerry Z. 2018. The Tyranny of Metrics. Princeton University Press.

Ostrom, Elinor. 2007. “Collective Action Theory.” In The Oxford Handbook of Comparative Politics. Oxford: Oxford University Press.

“Staying with the Trouble: Designing a Values-Enacted Academy.” 2020. Impact of Social Sciences (blog), April 23, 2020.