Jon Tennant - The Cost of Knowledge

“The cost of knowledge is extraordinarily low and the cost of withholding knowledge is extraordinarily high”

Jon Tennant, a palaeontologist and the Batman of Open Access, sat down with us (over Skype) to discuss the value of open access and wade through the mud of scientific publishing. Jon is relentless, and there was never a sense of deflation over the current situation, only a drive to push for more transparency and actively pursue new outlets. 

For those in the open access community, it will come as no surprise that prior to the interview, Jon sent us what my co-Science: Disrupt(er) Gemma considered a brilliant mind dump of awesomeness – which is to say he concisely, eloquently and humorously spilt the beans on all things publishing. 

So with that said, here we go: 


SD: When it comes to scientific publishing, there’s enormous cost involved - but what exactly are we paying for? 

JT: In the case of a journal like Science or Nature, it’s prestige. You get the satisfaction of knowing you've beaten the editorial criteria that result in a 90%+ rejection rate. You've also reduced the content of your work to two pages, hidden all the data in supplementary material, squashed the images allowed in the text to thumbnail size...and then paid to allow access to that single file. But a Nature publication defines your career, and that’s why researchers pine for them. Some journals charge as much as $6000 (including VAT – often excluded from APC prices up front), while others, such as PeerJ, offer $99 per author for a lifetime of publishing (one paper per year) – and there are others still that are free to read, funded by external grants or funders. How does that massive disparity offer good value for money to those who fund us? 

SD: You’ve certainly not been holding back on the subject of journal paywalls and embargoes… 

JT: Paywalls are a monstrosity, designed to prevent access for those without financial or academic privilege. If you think about it, they explicitly reveal the business model of traditional publishers: preventing access to knowledge. Green open access embargo periods are a hilarious irony where publishers truly have shot themselves in the foot. If they offer a competitive product, why do they need to enforce an embargo period on what is essentially a peer-reviewed word document? Surely what they offer is of substantially more value? But there’s something much more sinister to consider: recently a group of researchers saw fit to publish Ebola research in a ‘glamour magazine’ behind a paywall; they cared more about brand association than the content. This could be life-saving research – why did they not at least educate themselves on the preprint procedure and archive the paper at the pre-referee stage so that it was freely accessible (which is compliant with the journal policy in this case)? This is because we have distorted our incentive and assessment system to the extent that where something is published matters more than what we publish. 

“the currency represented by publishing is used as a proxy for the true currency of research, which is quality” 

SD: Publications are the currency of academia and getting into Nature or another top tier journal suggests a particularly impactful, sexy and noteworthy piece of research within these circles, right? 

JT: At the end of the day, if you were to apply as a post-doc with a list of publications in journals with a comparatively low impact factor, you probably wouldn’t be let in the door. The thing to reflect on is that the currency represented by publishing and journal brands is used as a proxy for the true currency of research, which is quality. Another irony here is that we continuously commit to such nonsensical practices, despite supposedly being the harbingers of evidence and reason. 

SD: The impact factor is a hugely flawed system that is deemed a measure of journal quality – but how did it come to be in the first place? 

JT: The impact factor was originally designed to assess which journals were being cited most, so that librarians could check which subscriptions were worth renewing! How it came to be used to assess individuals and research articles is through a combination of pressure, overburdened assessment systems, and laziness, along with having it forced down our throats by publishers at every turn. There are documents like the Leiden Manifesto, The Metric Tide, and the San Francisco Declaration on Research Assessment that all point out problems with the IF, and there are some interesting potential alternatives, such as a richer suite of article-level metrics. 

"The irony of it being a gold standard is that it is neither golden nor a standard." 

SD: One important area to cover is peer review, a process that is widely seen as a gold standard, effectively a banner for the rigour of the scientific method, but it’s not quite the case is it? 

JT: The irony of it being a gold standard is that it is neither golden nor a standard. A standard is supposed to be transparent, reproducible, and objective. Peer review is closed, exclusive, and secretive – how is that objective? In my opinion, it should remain an unpaid service in order to avoid spurious motives, but there are better ways of providing recognition through, for example, Publons – which makes reviews citable and creditable. Let’s face it, no peer review process will ever be perfect because it involves humans. 

SD: There’s an interesting variability between fields when it comes to attitudes towards publishing. High energy physics makes up an enormous proportion of arXiv posts when compared to many other areas of maths and physics. Perhaps this is just an emergent property of having to share information widely in a field like HEP - CERN would be pretty ineffective if they couldn't share data among fellow researchers. 

JT: Let’s not forget the Web was invented by Tim Berners-Lee for the promotion and dissemination of information, specifically scientific data coming out of CERN. Biology now has bioRxiv, but the uptake has been comparatively underwhelming so far. 

“The RIO journal […] is a really valuable service as it publishes items from the whole research process.” 

SD: Developing on arXiv, what is your opinion on overlay journals? 

JT: Overlay journals such as Discrete Analysis from Tim Gowers are certainly a step in the right direction. And yes, his celebrity status and personal prestige could be one factor in its success, but what’s great is that it puts the research back into the hands of the researchers. With ScienceOpen, anyone can create an overlay journal as we host the entire arXiv and you can administer peer review within that framework of overlays or collections. 

SD: Which essentially devalues the publishing process... 

JT: Sure - research communities do all of the research, they have shown they can organise internally to run peer review, hosting is very cheap, and there's a form of version control via preprints...all at a fraction of the cost, eliminating the need for journals. Current estimates are homing in on a per-article cost of $20-$30 when researchers organise this themselves, independent of legacy publishers. Journals are a hangover from the print age, when articles on the same theme were bundled together and sent off, but now we can do this ourselves through collections and not have the cost burden (literally $ billions every year!) attached to traditional publishing systems. 

SD: So are there any other promising alternatives to traditional publishing frameworks? 

JT: The RIO journal (Research Ideas and Outcomes) is one example. I think this is a really valuable service, as it publishes items from the whole research process. RIO is openly collaborative too, as it uses the ARPHA writing platform. F1000 is also fantastic - there has been a huge movement for open post-publication peer review, like at F1000Research, and my experience is that it is rapid and thorough. In fact, F1000’s system is probably the best I've used; the reviews have been swift and constructive, and using its version control you can retrieve the reviews and update the manuscript via Overleaf quite easily. 

SD: What we need is a GitHub for science! 

JT: A GitHub for science is absolutely something we academics need to build. Version control, like that offered by Git, would be extremely important. GitHub has everything we need: open collaboration, open peer review, instant publishing, and version control. Overlay some sort of community-based badge system, and that replaces the need for journal brand ribbons. The best thing is that it’s completely open and transparent, which means the entire process is verifiable, creditable and accountable. 

SD: And what about preprints? 

JT: There is pretty much no downside to posting preprints. Some authors feel they might get scooped, but a preprint is a statement of authority about an idea. Some feel journals won’t accept their work, which is not true – even Nature and Science accept papers previously posted as preprints. The common objections to their use are precisely the reasons to use them! And most importantly, they enable free, rapid, and open communication of research, which is something we should all be striving for. 

“These things are being discussed but that’s the issue, we’re only discussing it.” 

SD: Whose responsibility is it to pursue change? 

JT: The [mis-]use of the impact factor is all about power dynamics - it’s a bit unfair to ask junior researchers to take on the burden of this change. Instead, change should come from the higher ranks of academia, driven by grassroots campaigning from all levels. However, what you find is a frustrating offloading of accountability between librarians, publishers, researchers, heads of department and the funders. These things are being discussed, but that’s the issue: we’re only discussing it. 

SD:  Disruption has to come from academia. 

JT: Absolutely - but there is no money, little organisation, no time, and the impetus just isn't there when careers are at stake. This is a world where blacklisting from communities occurs for speaking out on these issues. People are told to be quiet, simply because those at the top don’t like the idea of change. The status quo has been good to them. Therefore there are few system-wide incentives to adopt cheaper, more efficient models, despite it being our duty as researchers to provide the best and most efficient bang for buck - and pass that back to the public who often fund us in the first place... 


We'd like to thank Jon for Skyping in to chat; he's just awesome. There's still so much more to cover, so rest assured we'll be returning to the murky world of publishing soon. 

Jon can be found here: @Protohedgehog 

And be sure to check out ScienceOpen: @Science_Open