Adventures in Signal Processing and Open Science

Month: September, 2013

A look at the process of submitting articles to OA journals | Open Science

If you are thinking of publishing your article under an open access model, there are usually two paths to choose from. One – Green OA, which means depositing the paper in a specially prepared repository (self-archiving). Two – the Gold OA model, where you submit your article to an OA journal to be edited, peer-reviewed and then published. At this point I would like to briefly describe the process of submitting articles to OA journals for those who are considering just that option. It is a general description, and various steps may differ depending on the publisher and the journal.


DAT versioned data

I just came across this presentation, shared by Karthik Ram on Twitter. It describes a project that tries to create a sort of git for data. It seems to be at a very early stage still, but looks very interesting.
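To give a feel for the "git for data" idea, here is a toy sketch of content-addressed data versioning: each snapshot of a dataset is stored under the hash of its contents, so identical versions are deduplicated and any version can be retrieved by its hash. This is only an illustration of the concept, not how the DAT project itself works; the class and method names are my own invention.

```python
import hashlib

class ToyDataStore:
    """A toy content-addressed store for dataset snapshots."""

    def __init__(self):
        self.objects = {}   # hash -> bytes (content-addressed storage)
        self.history = []   # ordered list of committed snapshot hashes

    def commit(self, data: bytes) -> str:
        """Store a snapshot and return its content hash."""
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data      # identical content stored only once
        self.history.append(digest)
        return digest

    def checkout(self, digest: str) -> bytes:
        """Retrieve a snapshot by its content hash."""
        return self.objects[digest]

store = ToyDataStore()
v1 = store.commit(b"id,value\n1,10\n")
v2 = store.commit(b"id,value\n1,10\n2,20\n")
assert store.checkout(v1) == b"id,value\n1,10\n"
```

Committing the same bytes twice yields the same hash and takes no extra storage, which is the property that makes git-style versioning attractive for large datasets.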

Publishing Models With Open Review

All of the platforms I mentioned previously are in the third and fourth quadrants above and they are all review platforms decoupled from the actual publication of papers. Where PubPeer and seem to aim at being independent post-publication peer review and discussion platforms, Publons seems to be trying both that approach as well as offering themselves as “review service provider” to journals.

One of the visions I have seen people describe is that of a publishing model where papers are published as “preprints” and get reviewed openly in some forum for that purpose – for example the platforms I mentioned. Journals are then simply formed as collections of these openly accessible papers, based on their openly accessible reviews. Journals of this kind do not own the papers (in terms of copyright and restricted access), and the value they add is simply to help readers filter through the jungle of available papers and to put a “seal of quality” on the collected papers. This is probably at the more idealistic and radical end of the spectrum, but I like the idea.

A less radical but similar idea is that of portable peer review, where authors can submit papers for review to a “review service provider” and then take the review comments to a journal (probably after revising the paper according to those comments) for consideration for publication. Should the journal not be interested, you can take the paper to another journal – still along with the same review comments. This could save some work and time in the review process. This seems to be what, for example, Peer Evaluation and Libre, as well as Publons, are doing.

I am particularly interested in how well Peer Evaluation and Libre will fare. These two platforms are created by two organisations that appear to be non-profit and seem quite open and idealistic. Collective Developments is the organisation behind Peer Evaluation. From their homepage:

Peer Evaluation is an Open Access initiative allowing for the dissemination and evaluation
of scholarly works. As a supplement to quantitative reputation metrics (H index, citation counts…),
Peer Evaluation comprises a qualitative reputation system powered by peers alone.
Finally, its business model is the one of a community interest project.

The organisation behind Libre is Open Scholar C.I.C. Their homepage states:

Open Scholar C.I.C. is a not-for-profit organisation whose activities, assets and profits are dedicated to the purpose of providing benefit to the scientific community. Our mission is to develop ideas and tools that promote open and transparent scientific collaboration for a faster, more efficient and natural organisation, evaluation and dissemination of global knowledge.

Open Scholar in particular sounds interesting to me. As they also write:

Our community is open to new members who wish to join efforts towards building a new culture of transparent academic collaboration for the benefit of global knowledge.

Well, I just might give it a shot and accept that invitation. I am eager to try to find some way to contribute to this open science publishing movement.

Inspiring Open Science Talk by Arvind Narayanan

I came across this talk by Arvind Narayanan a couple of days ago. He talks about how he has succeeded in publishing in unconventional ways and makes suggestions on how we might change the current publishing model – a topic that has become a bit of a hobby of mine lately:

Openness And Anonymity in Peer Review

About a month ago, I attempted to give an overview of a few open review platforms. In relation to that, the question of anonymity of the reviewers came up. I have given it some more thought and would like to discuss some of these thoughts.

First of all, when I talk about open/closed and anonymous/identified review, I would like to point out that I consider these two independent “dimensions” of the nature of scientific peer review:

          Anonymous   Identified
Closed        1           2
Open          3           4

So, reviews can either be closed or open and at the same time, anonymous or identified.

Traditional journals have, to the best of my knowledge, more or less all been in the first “quadrant” above, i.e. both closed in the sense that they are not available to the readers of the journal after publication; and anonymous in the sense that the authors cannot see who the reviewers are. I do not know any examples of publishing in the second quadrant. UPDATE: Matt Hodgkinson and Magnus Rattray were kind enough to give me two examples of review in the second quadrant as well. Matt Hodgkinson:

…reviewers at the BMJ have to sign their comments and this is optional at PLOS ONE. Neither publishes the reviews.

Magnus Rattray:

In the NIPS conference reviewers and programme chairs are identified to each other but anonymous to the authors. This introduces some advantages of openness (improved review quality, more chance to identify conflicts) while maintaining some advantages of anonymity (fear of reprisals or ruined friendships).

Publishing models and experiments are starting to pop up in the third and fourth quadrants – open review, where the reviewers are anonymous in some cases and in other cases identified.

Let’s take a look at the anonymity issue. I am not sure there is a clear answer to what is best, but there are clearly advantages and drawbacks to both anonymous and identified reviews that I think we need to discuss. Below I list a few advantages and drawbacks of anonymity, where I assume that a drawback of anonymous review is an advantage of identified review and vice versa.

Drawbacks of anonymity:

  • Reviewers do not get credit for their work. They cannot, for example, reference particular reviews in their CVs as they can with publications.
  • It is relatively “easy” for a reviewer to provide unnecessarily blunt or harsh critique.
  • It is difficult to guess if the reviewer has any conflict of interest with the authors by being, for example, a competing researcher interested in stalling the paper’s publication.


Advantages of anonymity:

  • Reviewers do not have to fear “payback” for an unfavourable review that is perceived as unfair by the authors of the work.
  • Some (perhaps especially “high-profile” senior faculty members) reviewers might find it difficult to find the time to provide as thorough a review as they would ideally like to, yet would still like to contribute and can perhaps provide valuable experienced insight. They can do so without putting their reputation on the line.

What else am I missing?

I have put a publicly editable version of this list on Google Drive – feel free to add points if you like, or comment below:

The Open Access Button

Recently, David Carroll and Joseph McArthur, medicine and pharmacology students from London, came up with a great little idea: why not develop an easy way for people looking to read scientific papers online to report when they encounter a paper they cannot access because they have to pay for it? That is, when they “hit a paywall”. David and Joseph set out to realise this idea by developing a browser button that users can click when that happens. The idea is to record reported incidents in a database to calculate statistics of how large this problem is. The idea has been well received and lots of people seem to have joined the effort to help develop it. You can read more about it here:

and follow their progress here:
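The statistics side of the idea is simple to picture: every button click reports the URL of a paywalled paper, and aggregating reports by publisher domain gives a rough measure of where readers hit paywalls most often. Here is a minimal sketch of that aggregation step – hypothetical data and function name, not the actual Open Access Button code:

```python
from collections import Counter
from urllib.parse import urlparse

def paywall_stats(reported_urls):
    """Count paywall reports per publisher domain."""
    return Counter(urlparse(u).netloc for u in reported_urls)

# Hypothetical reports collected from button clicks:
reports = [
    "https://publisher-a.example/article/1",
    "https://publisher-a.example/article/2",
    "https://publisher-b.example/paper/9",
]
stats = paywall_stats(reports)
assert stats["publisher-a.example"] == 2
```

A real deployment would of course store the reports in a database and likely record more context (timestamp, referring search, user’s institution), but the core statistic is just this kind of count.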
