Adventures in Signal Processing and Open Science

Month: January, 2016

Thoughts about Scholarly HTML

The company science.ai is working on a draft standard (or what I guess they hope will eventually become a standard) called Scholarly HTML. The purpose seems to be to standardise the way scholarly articles are structured as HTML, so that HTML can serve as a more semantic alternative to, for example, PDF, which may look nice but does nothing to help convey the structure of the content, probably rather the contrary.
They present their proposed standard in this document. They also seem to have formed a community group at the World Wide Web Consortium. It appears this is not a new initiative: there was already an earlier project called Scholarly HTML, and science.ai seem to be trying to take the idea further from there. Martin Fenner has written a bit of the background story behind the original Scholarly HTML.
I read science.ai’s proposal. It seems like a very promising initiative because it would allow scholarly articles across publishers to be understood better by, not least, algorithms for content mining, automated literature search, recommender systems etc. It would be particularly helpful if all publishers shared a common standard for marking up articles, and HTML seems a good choice since you only need a web browser to display it. That points to another nice feature: I tend to read a lot on my mobile phone and tablet, and it really is a pain when the content does not fit the screen. This is often the case with PDF, which does not reflow well in the apps I use for viewing. Here HTML would be much better, since it is not focused on physical pages the way PDF is.
I started looking at this proposal because it seemed like a natural direction to explore further after my crude preliminary experiments in Publishing Mathematics in e-books.
After reading the proposal, a few questions arose:

  1. From the way the formatting of references is described, it seems to me as if references can only be of type “schema:Book” or “schema:ScholarlyArticle”. Does this mean that they see no need to cite anything but books or scholarly articles? I know that some people hold the IMO very conservative view that the reference list should only refer to peer-reviewed material, but this is too constrained, and I certainly think it will be relevant to cite websites, data sets, source code etc. as well. It should all go into the reference list to make it easier to understand what the background material behind a paper is. This calls for a much richer selection of entry types; Biblatex’s entry types, for example, could serve as inspiration.
  2. The authors and affiliations section is described here. Author entries are described as having:

    property="schema:author" or property="schema:contributor" and a typeof="sa:ContributorRole"

    I wonder if this way of specifying authors/contributors makes it possible to specify more granular roles, or multiple roles for each author, as with for example Open Research Badges? (I sketch below, after this list, how I imagine such markup could look.)

  3. Under article structure, they list the following types of sections:

    Sections are expected to be typed using the typeof attribute. The following typeof values are currently understood:

    sa:Funding (which has its specific structure)
    sa:Abstract
    sa:MaterialsAndMethods
    sa:Results
    sa:Conclusion
    sa:Acknowledgements
    sa:ReferenceList

    I think there is a need for more types of sections. I also see articles containing, for example, Introduction, Analysis, and Discussion sections, and I am sure there are more that I have not thought of.

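To make the points above a bit more concrete, here is a rough sketch of how I imagine an article could be marked up along these lines. I am only going by the snippets quoted above, so the element structure and the type names I have added myself (in particular the sa:Introduction section type and the schema:Dataset reference, which are not in the draft’s lists) are guesses, not what the proposal actually prescribes:

    <article>
      <header>
        <h1 property="schema:name">An example article title</h1>
        <!-- an author entry with a contributor role; presumably several,
             more granular roles could be listed per author -->
        <span property="schema:author" typeof="sa:ContributorRole">
          <span property="schema:author" typeof="schema:Person">
            <span property="schema:name">Jane Doe</span>
          </span>
          <span property="schema:roleName">Software, data analysis</span>
        </span>
      </header>
      <section typeof="sa:Abstract">
        <h2>Abstract</h2>
        <p>…</p>
      </section>
      <!-- a section type not among those listed in the draft -->
      <section typeof="sa:Introduction">
        <h2>Introduction</h2>
        <p>…</p>
      </section>
      <section typeof="sa:ReferenceList">
        <h2>References</h2>
        <ol>
          <!-- a cited resource that is neither a book nor a scholarly article -->
          <li typeof="schema:Dataset">
            <span property="schema:name">An example data set</span>
          </li>
        </ol>
      </section>
    </article>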

Comments on “On the marginal cost of scholarly communication”

A new science publisher seems to have appeared recently, or publisher is probably not the right word… science.ai is apparently neither a journal nor a publisher per se. Rather, they seem to be focusing on developing a new publishing platform that provides a modern science publishing solution, built web-native from the bottom up.

The idea feels right and, in my opinion, Standard Analytics (the company behind science.ai) could very likely become an important player in a future where journals are to a large extent replaced by recommender systems and papers are categorised narrowly by topic rather than by where they were published. Go check out their introduction to their platform afterwards…

A few days ago, I became aware that they had published an article or blog post about “the marginal cost of scholarly communication” in which they examine what it costs a publisher to publish scientific papers in a web-based format. This is a welcome contribution to the ongoing discussion of what is actually a “fair cost” of open access publishing, considering the very pricey APCs that some publishers charge (see for example Nature Publishing Group). In estimating this marginal cost they define

the minimum requirements for scholarly communication as: 1) submission, 2) management of editorial workflow and peer review, 3) typesetting, 4) DOI registration, and 5) long-term preservation.

They collect data on what these services cost when bought from available vendors, and alternatively consider what they would cost if the publisher is assumed to already have software for performing the typesetting etc. (perhaps developed in-house or available as free, open-source software). For the case where all services are bought from vendors, they find that the marginal cost of publishing a paper is between $69 and $318. For the case where the publisher is assumed to have all necessary software available and basically only needs to pay for server hosting and registration of DOIs, the price is found to be dramatically lower – between $1.36 and $1.61 per paper.

Marginal Cost

This all sounds very interesting, but I found this marginal cost a bit unclear. They define the marginal cost of publishing a paper as follows:

The marginal cost only takes into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations.

OK, but here I am left in doubt about what they categorise as normal business operations. One example is apparently the membership fee to CrossRef for issuing DOIs:

As our focus is on marginal cost, we excluded the membership fee from our calculations.

However, in a box at the end of the article they mention eLife as a specific example:

Based on their 2014 annual report (eLife Sciences, 2014), eLife spent approximately $774,500 on vendor costs (equivalent to 15% of their total expenses). Given that eLife published 800 articles in 2014, their marginal cost of scholarly communication was $968 per article.

I was not able to find the specific amount of $774,500 myself in eLife’s annual report, but assuming it is correct, how do we know whether, for example, CrossRef membership costs are included in eLife’s vendor costs? If they are, this estimate of eLife’s marginal cost of publication is not comparable to the marginal costs calculated in Standard Analytics’ paper as mentioned above.

We could also discuss how relevant the marginal cost is, at least if you are in fact

an agent looking to start an independent, peer-reviewed scholarly journal

I mean, in that situation you are actually looking to start from scratch and have to take all those “fixed costs related to normal business operations” into account…

I should also mention that I have highlighted the quotes above from the paper via hypothes.is here.

Typesetting Solutions

Standard Analytics seem to assume that typesetting will have to include conversion from Microsoft Word, LaTeX etc. and suggest Pandoc as a solution, while at the same time pointing out that there is a lack of such freely available solutions for those wishing to base their journal on their own software platform. If a prospective journal were to restrict submissions to LaTeX format, there are also solutions such as LaTeXML, and ShareLaTeX’s open source code could be used for this purpose as well. Other interesting solutions are also being developed, and I think it is worth keeping an eye on initiatives like PeerJ’s paper-now. Finally, it could also be an idea to simply ask existing free, open-access journals how they handle these things (which I assume they do in a very low-cost way). One example I can think of is the Journal of Machine Learning Research.
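
As a rough illustration of what the basic conversion step could look like with Pandoc (this is just a sketch with a hypothetical file name, and the exact options depend on the manuscript):

    pandoc manuscript.tex --from latex --to html5 --standalone --mathjax --output manuscript.html

The hard part is of course everything around this single command: templates, bibliographies, figures, author metadata and so on, which I suppose is where the lack of ready-made free solutions is really felt.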

Other Opinions

I just became aware that Cameron Neylon has also written a post about Standard Analytics’ paper: The Marginal Costs of Article Publishing – Critiquing the Standard Analytics Study, which I will go and read now…
