Adventures in Signal Processing and Open Science

iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year's iTWIST workshop in August – Lieven Vandenberghe.

Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning and heads an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck.

Karin Schnass joined the University of Innsbruck in December 2014 on an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, which she held from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master's degree in mathematics with distinction (thesis: “Gabor Multipliers – A Self-Contained Survey”). She received her PhD in computer, communication and information sciences from EPFL, Switzerland, in 2009 (thesis: “Sparsity & Dictionaries – Algorithms & Design”). Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements:

y = A x

From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem:

minimize |x|_1 subject to |y - A x|_2 < ε

If the vector x we observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D:

x = D α

where we take our measurements of x using some measurement matrix M:

y = M x = M D α = A α

and we reconstruct the sparse vector α as follows:

minimize |α|_1 subject to |y - M D α|_2 < ε

The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it:

β = D' x

Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows:

minimize |D' x|_1 subject to |y - M x|_2 < ε

Now if D is a (square) orthonormal matrix such as an IDFT, we can consider D' a DFT matrix and they are simply each other's inverses. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D', which behaves differently from the synthesis dictionary.
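
To make the two formulations a bit more concrete, here is a minimal sketch (not from Karin's work) of both reconstructions using cvxpy. The dimensions, the Gaussian measurement matrix and the orthonormal dictionary are illustrative choices of mine; with an orthonormal D the two programmes should return essentially the same solution, as argued above.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 64, 24, 4                      # signal length, measurements, sparsity

# Orthonormal dictionary (square case) and a k-sparse coefficient vector alpha
D = np.linalg.qr(rng.standard_normal((n, n)))[0]
alpha_true = np.zeros(n)
alpha_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true = D @ alpha_true                  # x = D alpha  (synthesis model)

M = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = M @ x_true                                 # y = M x = M D alpha
eps = 1e-3

# Synthesis reconstruction: minimise |alpha|_1 subject to |y - M D alpha|_2 <= eps
a = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(a)),
           [cp.norm(y - M @ D @ a, 2) <= eps]).solve()
x_synthesis = D @ a.value

# Analysis reconstruction: minimise |D' x|_1 subject to |y - M x|_2 <= eps
x_var = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(D.T @ x_var)),
           [cp.norm(y - M @ x_var, 2) <= eps]).solve()
x_analysis = x_var.value

print("synthesis error:", np.linalg.norm(x_true - x_synthesis))
print("analysis error: ", np.linalg.norm(x_true - x_analysis))
```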

Karin will give an overview of the synthesis and the analysis models and talk about how to learn dictionaries that are useful for either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler):

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview over both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.

My problem with ResearchGate and Academia.edu

TL;DR: you can find my publications in Aalborg University’s repository or on ORCID.

ResearchGate – wow, a social network for scientists and researchers, you might think. But think again about the ‘wow’. At least I am not so impressed. Here’s why…

I once created a profile on ResearchGate out of curiosity. It initially seemed like a good idea, but I soon realised that this would just add to the list of profile pages I would have to update, sigh. But so far I have kept my profile for fear of missing out. What if others cannot find my publications if I am not on ResearchGate? And so on…

But updating my profile is just the tip of the iceberg. What I find far more problematic about the site is their keen attempts to create a walled garden community. Let me explain what I mean. Take this paper for example (this is not a critique of this paper – in fact I think this is an example of a very interesting paper): One-Bit Compressive Sensing of Dictionary-Sparse Signals by Rich Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, and Mary Wootters:

  1. First of all, when you click the link to the paper above you cannot even see it without logging in on ResearchGate.
    “What’s the problem?”, you might think. “ResearchGate is free – just create an account and log in”. But I would argue that open access is not open access if you have to register and log in to read the paper – even if it is free.
  2. Once you log in and can finally see the paper’s page, it turns out that you cannot read the actual paper. This appears to be because the author has not uploaded the full text, and ResearchGate instead displays a “Request full-text” button where you can ask the author to provide it.
    “Now what?!”, you are thinking. “This is a great service to both readers and authors, making it easy to connect authors with their readers and enabling them to easily give the readers what they are looking for” – wrong! This is a hoax set up by ResearchGate to convince readers that they are a great, benevolent provider of open access literature.

The problem is that the paper is already accessible here: on arXiv – where it should be. ResearchGate has simply scraped the paper info from arXiv and is trying to persuade the author to upload it to ResearchGate as well, to make it look like ResearchGate is the place to go to read this paper. They could have chosen to simply link to the paper on arXiv, making it easy for readers to find it straight away. But they will not do that, because they want readers to stay inside their walled garden, controlling the information flow to create the false impression that ResearchGate is the only solution.

As if this was not enough, there are other reasons to reconsider your membership. For example, with their ResearchGate score they are contributing to a metric obsession akin to the abuse of the journal impact factor. The problem is that this score is neither transparent nor reproducible, contributing only to an obsession with numbers that drives “shiny” research and encourages gaming of metrics.

I don’t know about you, but I have had enough – I quit!

…The clever reader has checked and noticed that I have not deleted my ResearchGate profile. Why? Am I just another hypocrite? Look closer – you will notice that the only publication on my profile is a note explaining why I do not wish to use ResearchGate. I think it is better to actively inform about my choice and attack the problem from the inside rather than just staying silently away.

Update 6th of July 2016…

I have now had a closer look at Academia.edu as well and it turns out that they are doing more or less the same, so I have decided to quit this network as well. They do not let you read papers without logging in and they also seem to have papers obviously scraped from arXiv, waiting for the author to upload the full-text version and ignoring the fact that it is available on arXiv. Again, they want to gather everything in-house to make it appear as if they are the rightful gate-keepers of all research.

As I did on ResearchGate, I have left my profile on Academia.edu with just a single publication, which is basically this blog post, plus a publication that is a link to my publications in Aalborg University’s repository.

iTWIST’16 Keynote Speakers: Lieven Vandenberghe

The workshop program has been ready for some time now, and we are handling the final practicalities to be ready to welcome you in Aalborg in August for the iTWIST’16 workshop. So now I think it is time to start introducing you to our – IMO – pretty impressive line-up of keynote speakers.

First up is Prof. Lieven Vandenberghe from UCLA. Prof. Vandenberghe is an expert on convex optimisation and signal processing and is – among other things – well known for his fundamental textbook “Convex Optimization”, co-authored with Stephen Boyd.

Lieven Vandenberghe is Professor in the Electrical Engineering Department at UCLA. He joined UCLA in 1997, following postdoctoral appointments at K.U. Leuven and Stanford University, and has held visiting professor positions at K.U. Leuven and the Technical University of Denmark. In addition to “Convex Optimization”, he also edited the “Handbook of Semidefinite Programming” with Henry Wolkowicz and Romesh Saigal.

At iTWIST, I am looking forward to hearing him speak about Semidefinite programming methods for continuous sparse optimization. So far, it is my impression that most theory and literature about compressed sensing and sparse methods has relied on discrete dictionaries consisting of a basis or frame of individual dictionary atoms. If we take the discrete Fourier transform (DFT) as an example, the dictionary has fixed atoms corresponding to a set of discrete frequencies. More recently, theories have started emerging that allow continuous dictionaries instead (see for example the work of Ben Adcock, Anders Hansen, Bogdan Roman et al.). As far as I understand, this is a generalisation that in principle lets you get rid of the discretised atoms and consider any atoms on the continuum “in between” as well. This is what Prof. Vandenberghe has planned for us so far (and this is joint work with Hsiao-Han Chao):

We discuss extensions of semidefinite programming methods for 1-norm minimization over infinite dictionaries of complex exponentials, which have recently been proposed for superresolution and gridless compressed sensing.

We show that results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple constructive proofs for the semidefinite representations of the penalties used in these problems. The connection leads to extensions to more general dictionaries associated with linear state-space models and matrix pencils.

The results will be illustrated with applications in spectral estimation, array signal processing, and numerical analysis.

iTWIST’16 is taking shape

This year’s international Traveling Workshop on Interactions Between Sparse Models and Technology is starting to take shape now. The workshop will take place on the 24th-26th of August 2016 in Aalborg. See also this recent post about the workshop.

[Photo by Alan Lam (CC-BY-ND)]

Aalborg is a beautiful city in the northern part of Denmark, and what many of you probably do not know is that Aalborg was actually rated “Europe’s happiest city” in a recent survey by the European Commission.

It is now possible to register for the workshop and if you are quick and register before July, you get it all for only 200€. That is, three days of workshop, including lunches and a social event with dinner on Thursday evening.

There are plenty of good reasons to attend the workshop. In addition to the many exciting contributed talks and posters that we are now reviewing, we have an impressive line-up of 9 invited keynote speakers! I will be presenting what the speakers have in store for you here on this blog in the coming days.

 

international Traveling Workshop on Interactions between Sparse models and Technology

On the 24th to 26th of August 2016, we are organising a workshop called international Traveling Workshop on Interactions between Sparse models and Technology (iTWIST). iTWIST is a biennial workshop organised by a cross-European committee of researchers and academics on theory and applications of sparse models in signal processing and related areas. The workshop has so far taken place in Marseille, France in 2012 and in Namur, Belgium in 2014.

I was very excited to learn last fall that the organising committee of the previous two instalments of the workshop had the confidence to let Morten Nielsen and me organise the workshop in Aalborg (Denmark) in 2016.

Themes

This year, the workshop continues many of the themes from the first two years and adds a few new ones:

  • Sparsity-driven data sensing and processing (e.g., optics, computer vision, genomics, biomedical, digital communication, channel estimation, astronomy)
  • Application of sparse models in non-convex/non-linear inverse problems (e.g., phase retrieval, blind deconvolution, self calibration)
  • Approximate probabilistic inference for sparse problems
  • Sparse machine learning and inference
  • “Blind” inverse problems and dictionary learning
  • Optimization for sparse modelling
  • Information theory, geometry and randomness
  • Sparsity? What’s next?
    • Discrete-valued signals
    • Union of low-dimensional spaces,
    • Cosparsity, mixed/group norm, model-based, low-complexity models, …
  • Matrix/manifold sensing/processing (graph, low-rank approximation, …)
  • Complexity/accuracy tradeoffs in numerical methods/optimization
  • Electronic/optical compressive sensors (hardware)

I would like to point out here, as Igor Carron mentioned recently, that hardware designs are also very welcome at the workshop – it is not just theory and thought experiments. We are very interested in getting a good mix of theoretical aspects and applications of sparsity and related techniques.

Keynote Speakers

I am very excited to be able to present an – IMO – very impressive line-up of keynote speakers covering a wide range of themes:

  • Lieven Vandenberghe – University of California, Los Angeles – homepage
  • Gerhard Wunder – TU Berlin & Fraunhofer Institute – homepage
  • Holger Rauhut – RWTH Aachen – homepage
  • Petros Boufounos – Mitsubishi Electric Research Labs – homepage
  • Florent Krzakala and Eric Tramel – ENS Paris – homepage
  • Phil Schniter – Ohio State University – homepage
  • Karin Schnass – University of Innsbruck – homepage
  • Rachel Ward – University of Texas at Austin – homepage
  • Bogdan Roman – University of Cambridge – homepage

The rest of the workshop is open to contributions from the research community. Please send your papers (in the form of 2-page extended abstracts – see details here). Your research can be presented as an oral presentation or a poster. If you prefer, you can state your preference (oral or poster) during the submission process, but we cannot guarantee that we can honour your request and reserve the right to assign papers to either category in order to put together a coherent programme. Please note that we consider oral and poster presentations equally important – poster presentations will not be stowed away in a dusty corner during coffee breaks but will have one or more dedicated slots in the programme!

Open Science

In order to support open science, we strongly encourage authors to publish any code or data accompanying their papers in a publicly accessible repository, such as GitHub, Figshare, Zenodo etc.

The proceedings of the workshop will be published on arXiv as well as in SJS in order to make the papers openly accessible and encourage post-publication discussion.

Thoughts about Scholarly HTML

The company science.ai is working on a draft standard (or what I guess they hope will eventually become a standard) called Scholarly HTML. The purpose of this seems to be to standardise the way scholarly articles are structured as HTML, in order to use that as a more semantic alternative to, for example, PDF, which may look nice but does nothing to help understand the structure of the content – probably the contrary.
They present their proposed standard in this document. They also seem to have formed a community group at the World Wide Web Consortium. It appears this is not a new initiative: there was already a previous project called Scholarly HTML, but science.ai seem to be trying to take the idea further from there. Martin Fenner wrote a bit about the background of the original Scholarly HTML.
I read science.ai’s proposal. It seems like a very promising initiative because it would allow scholarly articles across publishers to be understood better by, not least, algorithms for content mining, automated literature search, recommender systems etc. It would be particularly helpful if all publishers had a common standard for marking up articles, and HTML seems a good choice since you only need a web browser to display it. That points to another nice feature: I tend to read a lot on my mobile phone and tablet, and it really is a pain when the content does not fit the screen. This is often the case with PDF, which does not reflow too well in the apps I use for viewing. Here HTML would be much better, not being focused on physical pages like PDF.
I started looking at this proposal because it seemed like a natural direction to look further in from my crude preliminary experiments in Publishing Mathematics in e-books.
After reading the proposal, a few questions arose:

  1. The way the formatting of references is described, it seems to me as if references can be of type “schema:Book” or “schema:ScholarlyArticle”. Does this mean that they do not consider a need to cite anything but books or scholarly articles? I know that some people hold the IMO very conservative view that the reference list should only refer to peer-reviewed material, but this is too constrained, and I certainly think it will be relevant to cite websites, data sets, source code etc. as well. It should all go into the reference list to make it easier to understand what the background material behind a paper is. This calls for a much richer selection of entry types. For example, BibLaTeX’s entry types could serve as inspiration.
  2. The authors and affiliations section is described here. Author entries are described as having:

    property=”schema:author” or property=”schema:contributor” and a typeof=”sa:ContributorRole”

    I wonder if this way of specifying authors/contributors makes it possible to specify more granular roles or multiple roles for each author like for example Open Research Badges?

  3. Under article structure, they list the following types of sections:

    Sections are expected to be typed using the typeof attribute. The following typeof values are currently understood:

    sa:Funding (which has its specific structure)
    sa:Abstract
    sa:MaterialsAndMethods
    sa:Results
    sa:Conclusion
    sa:Acknowledgements
    sa:ReferenceList

    I think there is a need for more types of sections. I, for example, also see articles containing Introduction, Analysis, and Discussion sections, and I am sure there are more that I have not thought of.

Comments on “On the marginal cost of scholarly communication”

A new science publisher seems to have appeared recently, or publisher is probably not the right word… science.ai is apparently neither a journal nor a publisher per se. Rather, they seem to be focusing on developing a new publishing platform that provides a modern science publishing solution, built web-native from the bottom up.

The idea feels right and in my opinion, Standard Analytics (the company behind science.ai) could very likely become an important player in a future where I think journals will to a large extent be replaced by recommender systems and where papers can be narrowly categorised by topic rather than by where they were published. Go check out their introduction to their platform afterwards…

A few days ago, I became aware that they had published an article or blog post about “the marginal cost of scholarly communication” in which they examine what it costs as a publisher to publish scientific papers in a web-based format. This is a welcome contribution to the ongoing discussion of what is actually a “fair cost” of open access publishing, considering the very pricey APCs that some publishers charge (see for example Nature Publishing Group). In estimating this marginal cost they define

the minimum requirements for scholarly communication as: 1) submission, 2) management of editorial workflow and peer review, 3) typesetting, 4) DOI registration, and 5) long-term preservation.

They collect data on what these services cost using available vendors of such services and alternatively consider what they would cost if you assume the publisher has software available for performing the typesetting etc. (perhaps they have developed it themselves or have it available as free, open-source software). For the case where all services are bought from vendors, they find that the marginal cost of publishing a paper is between $69 and $318. For the case where the publisher is assumed to have all necessary software available and basically only needs to pay for server hosting and registration of DOIs, the price is found to be dramatically lower – between $1.36 and $1.61 per paper.

Marginal Cost

This all sounds very interesting, but I found this marginal cost a bit unclear. They define the marginal cost of publishing a paper as follows:

The marginal cost only takes into account the cost of producing one additional scholarly article, therefore excluding fixed costs related to normal business operations.

OK, but here I am in doubt about what they categorise as normal business operations. One example apparently is the membership cost to CrossRef for issuing DOIs:

As our focus is on marginal cost, we excluded the membership fee from our calculations.

However, in a box at the end of the article they mention eLife as a specific example:

Based on their 2014 annual report (eLife Sciences, 2014), eLife spent approximately $774,500 on vendor costs (equivalent to 15% of their total expenses). Given that eLife published 800 articles in 2014, their marginal cost of scholarly communication was $968 per article.

I was not able to find the specific amount of $774,500 myself in eLife’s annual report but, assuming it is correct, how do we know whether for example CrossRef membership costs are included in eLife’s vendor costs? If they are, this estimate of eLife’s marginal cost of publication is not comparable to the marginal costs calculated in Standard Analytics’ paper, as mentioned above.

We could also discuss how relevant the marginal cost is, at least if you are in fact

an agent looking to start an independent, peer-reviewed scholarly journal

I mean, in that situation you are actually looking to start from scratch and have to take all those “fixed costs related to normal business operations” into account…

I should also mention that I have highlighted the quotes above from the paper via hypothes.is here.

Typesetting Solutions

Standard Analytics seem to assume that typesetting will have to include conversion from Microsoft Word, LaTeX etc. and suggest Pandoc as a solution, while at the same time pointing out that there is a lack of such freely available solutions for those wishing to base their journal on their own software platform. If a prospective journal were to restrict submissions to LaTeX format, there are also solutions such as LaTeXML, and ShareLaTeX’s open source code could be used for this purpose as well. Other interesting solutions are also being developed, and I think it is worth keeping an eye on initiatives like PeerJ’s paper-now. Finally, it could also be an idea to simply ask existing free, open-access journals how they handle these things (which I assume they do in a very low-cost way). One example I can think of is the Journal of Machine Learning Research.

Other Opinions

I just became aware that Cameron Neylon also wrote a post about Standard Analytics’ paper: The Marginal Costs of Article Publishing – Critiquing the Standard Analytics Study, which I will go and read now…

Peer Evaluation of Science

This is a proposal for a system for evaluating the quality of scientific papers by open review through a platform inspired by StackExchange. I have reposted it here from The Self-Journal of Science, where I hope my readers will go and comment on it: http://www.sjscience.org/article?id=401. The proposal is also intended as a contribution to #peerrevwk15 on Twitter.

I have chosen to publish this proposal on SJS since this is a platform that comes quite close to what I envision in this proposal.

Introduction

Researchers currently rely on traditional journals for publishing their research. Why is this? you might ask. Is it because it is particularly difficult to publish research results? Perhaps 300 years ago, but certainly not today, when anyone can publish anything on the Internet with very little trouble. Why do we keep publishing with them, then, when they charge outrageous amounts for their services in the form of APCs from authors or subscriptions from readers or their libraries? One of the real reasons, I believe, is prestige.

The purpose of publishing your work in a journal is not really to get your work published and read; it is to prove that your paper was good enough to be published in that particular journal. The more prestigious the journal, the better the paper, it seems. This roughly boils down to using the impact factor of the journal to evaluate the research of authors publishing in it (a bad idea – see for example Wrong Number: A closer look at Impact Factors). It is often mentioned in online discussions how researchers are typically evaluated by hiring committees or grant reviewers based on which journals they have published in. In Denmark (and Norway – possibly other countries?), universities are even funded based on which journals their researchers publish in.

I think the journal’s reputation (impact factor) is used in current practice because it is easy. It is a number that a grant reviewer or hiring committee member can easily look up and use to assess an author without having to read piles of their papers, which they would have to be experts on to judge. I support a much more qualitative approach based on the individual works of the individual researcher. So, to have any hope of replacing this practice, I think we need to offer a quantitative “short-cut” that can compete with the impact factor (and H-index etc.), which say little about the actual quality of the researcher’s works. Sadly, a quantitative metric is likely what hiring committees and grant reviewers are going to be looking at. I think a (quantitative) “score”, or several such scores on different aspects of a paper, accompanying the (qualitative) review can be used to provide such an evaluation metric. Below I present some ideas of how such a metric can be calculated, along with some potential pitfalls that we need to discuss how to handle.

I believe that a system to quantify various aspects of a paper’s quality as part of an open review process could help us turn to a practice of judging papers and their authors by the merits of the individual paper instead of by the journal in which they are published. I also believe that this can be designed to incentivise participation in such a system.

Research and researchers should be evaluated directly by the quality of the research instead of indirectly through the reputation of the journals they publish in. My hope is to base this evaluation on open peer review, i.e. the review comments are open for anyone to read along with the published paper. Even when a publisher (in the many possible incarnations of that word) chooses to use pre-publication peer review, I think that should be made open in the sense that the review comments should be open for all to read after paper acceptance. And in any case, I think it should be supplemented by post-publication peer review (open both in the sense that the reviews are open to read and in the sense that anyone can comment – although one might opt to restrict reviewers to researchers who have published something themselves, as for example ScienceOpen does).

What do I mean by using peer review to replace journal reputation as a method of evaluation? This is where I envision calculating a “quality” or “reputation” metric as part of the review process. This metric would be established through a quality “score” (could be multiple scores targeting different aspects of the paper) assigned by the reviewers/commenters, but endorsed (or not) by other reviewers through a two-layer scoring system inspired by the reputation metric from StackExchange. This would, in my opinion, comprise a metric that:

  1. specifically evaluates the individual paper (and possibly the individual researcher through a combined score of her/his papers),
  2. is more than a superficial number – the number only accompanies a qualitative (expert) review of the individual paper that others can read to help them assess the paper,
  3. is completely transparent – accompanying reviews/comments are open for all to read, and the votes/scores and the algorithm calculating a paper’s metric are completely open.

I have mentioned that this system is inspired by StackExchange. Let me first briefly explain what StackExchange is and how their reputation metric works: StackExchange is a question & answer (Q&A) site where anyone can post questions in different categories and anyone can post answers to those questions. The whole system is governed by a reputation metric which seems to be the currency that makes this platform work impressively well. Each question and each answer on the platform can be voted up or down by other users. When a user gets one of his/her questions or answers voted up, the user’s reputation metric increases. The score resulting from the voting helps rank questions and answers so the best ones are seen at the top of the list.

The System

A somewhat similar system could be used to evaluate scientific papers on a platform designed for the purpose. As I mentioned, my proposal is inspired by StackExchange, but I propose a somewhat different mechanism, as the one based on questions and answers on StackExchange does not exactly fit the purpose here. I propose the following two-layer system.

  • First layer: each paper can be reviewed openly by other users on the platform. When someone reviews a paper, along with submission of the review text, the reviewer is asked to score the paper on one or more aspects. This could be simply “quality”, whatever this means, or several aspects such as “clarity”, “novelty”, “correctness”. It is of course an important matter to determine these evaluation aspects and define what they should mean. This is however a different story and I focus on the metric system here.
  • Second layer: other users on the platform can of course read the paper as well as the reviews attached to it. These users can score the individual reviews. This means that some users, even if they do not have the time to write a detailed review themselves, can still evaluate the paper by expressing whether they agree or disagree with the existing reviews of the paper.
  • What values can a score take? We will get to that in a bit.

How are metrics calculated based on this two-layer system?

  • Each paper’s metric is calculated as a weighted average of the scores assigned by reviewers (first layer). The weights assigned to the individual reviews are calculated from the scores other users have assigned to the reviews (second layer). The weight could be calculated in different ways depending on which values scores can take. It could be an average of the votes. It could also be calculated as the sum of votes on each review, meaning that reviews with lots of votes would generally get higher weights than reviews with few votes (one possible variant is sketched in code after this list).
  • Each author’s metric is calculated based on the scores of the author’s papers. This could be done in several ways: one is a simple average; this would not take into account the number of papers an author has published. Maybe it should, so the sum of scores of the author’s papers could be another option. Alternatively, it might also be argued that each paper’s score in the author’s metric should be weighted by the “significance” of the paper, which could be based on the number of reviews and votes each paper has received.
  • Each reviewer’s metric is calculated based on the scores of her/his reviews in a similar way to the calculation of authors’ metrics. This should incentivise reviewers to write good reviews. Most users on the proposed platform will act as both reviewers and authors and will therefore have both a reviewer and an author metric.
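
To make the weighting above concrete, here is a minimal sketch of one possible variant of the two-layer calculation, using the ±1 scores proposed in the next section. The data layout, the vote-sum weights and the flooring of weights at 1 (so that unvoted reviews still count and negative weights are avoided) are my own illustrative assumptions, not a fixed part of the proposal.

```python
def paper_metric(reviews):
    """reviews: list of dicts like {"score": +1 or -1, "votes": [+1, -1, ...]}.

    First layer: each review scores the paper +/-1.
    Second layer: other users endorse or disapprove of each review with +/-1 votes.
    """
    # Weight each review by the sum of votes it received, floored at 1.
    weights = [max(1, sum(r["votes"])) for r in reviews]
    scores = [r["score"] for r in reviews]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)


def author_metric(papers):
    """papers: list of review lists, one per paper (simple-average variant).

    A reviewer's metric would be computed analogously from the votes on
    her/his reviews.
    """
    return sum(paper_metric(p) for p in papers) / len(papers)


# Example: one well-endorsed positive review and one disputed negative review.
reviews = [{"score": +1, "votes": [+1, +1, +1]},
           {"score": -1, "votes": [+1, -1]}]
print(paper_metric(reviews))   # (3*1 + 1*(-1)) / (3 + 1) = 0.5
```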

Which Values Can Votes Have?

I propose to make the scores of both papers (first layer) and individual reviews (second layer) a ±1 vote. One could argue that this is a very coarse-grained scale, but consider the option of, for example, a 10-level scale. This could cause problems of different users interpreting the scale differently. Some users might hardly ever use the maximum score while other users might give the maximum score to all papers that they merely find worthy of publication. By relying on a simple binary score instead, an average over a (hopefully) high number of reviews and review endorsements/disapprovals would be less sensitive to individual interpretations of the score value than many-level scores.

Conclusion

As mentioned, I hope the proposed model of evaluating scientific publications by qualitative reviews accompanied by a quantitative score would provide a useful metric that – although still quantitative – could prove a more accurate measure of the quality of individual publications for those who need to rely on such a measure. This proposal should not be considered a scientific article itself, but I hope it can be a useful contribution to a debate on how to make peer review both more open and more broadly useful to readers and evaluators of scientific publications.

As mentioned in the introduction, I have chosen to publish this proposal on SJS since it is a platform that comes quite close to what I envision here. I hope that readers will take the opportunity to comment on the proposal and help start a discussion about it.

It’s all about replication

A new journal appeared recently in the scientific publishing landscape: ReScience, announced at the recent EuroSciPy 2015 conference. The journal has been founded by Nicolas Rougier and Konrad Hinsen. This journal is remarkable in several ways – so remarkable, in fact, that I could not resist accepting their offer to become associate editor for the journal.

So how does this journal stand out from the crowd? First of all, it is about as open as it gets. The entire publishing process is completely transparent – from first submission through review to final publication. Second, the journal platform is based entirely on GitHub, the code repository home to a plethora of open source projects. This is part of what enables the journal to be so open about the entire publishing process. Third, the journal does not actually publish original research – there are plenty of journals for that already. Instead, ReScience focuses entirely on replications of already published computational science.

As has been mentioned by numerous people before me, when dealing with papers based on computational science it is not really enough to review the paper in the classical sense to ensure that the results can be trusted (this is not only a problem of computational science, but it is the particular focus of ReScience). Results need to be replicated to validate them, and this is what ReScience addresses.

Many of us probably know it: we are working on a new paper of our own and we need to replicate the results of some previous paper that we wish to compare our results against. Except for that comparison, this is essentially lost work after you get your paper published. Others looking at the original paper whose results you replicated may not be aware that anyone replicated these results. Now you can publish the replication of these previous results as well and get credit for it. At the same time you benefit the authors of the original results that you have replicated by helping validate their research.

The process of submitting your work to ReScience is described on their website along with the review process and the roles of editors and reviewers. So if you have replicated someone else’s computational work, go ahead and publish it in ReScience. If it is in the signal processing area I will be happy to take your submission through the publishing process.

Open Access Journals: What’s Missing?

I just came across this blog post by Nick Brown: Open Access journals: what’s not to like? This, maybe… That post was also what inspired the title of my post. His post really got me into writing mode, mostly because I don’t quite agree with him. I left this as a comment on his blog, but I felt it was worth repeating here.
