Adventures in Signal Processing and Open Science

Month: July, 2016

iTWIST’16 Keynote Speakers: Holger Rauhut

At this year’s international Travelling Workshop on Interactions between Sparse models and Technology (iTWIST) we have keynote speakers from several different scientific backgrounds. Our next speaker is a mathematician with a solid track record in compressed sensing and matrix/tensor completion: Holger Rauhut.

Holger Rauhut is Professor for Mathematics and Head of Chair C for Mathematics (Analysis) at RWTH Aachen University. Professor Rauhut came to RWTH Aachen in 2013 from the University of Bonn, where he had been Professor for Mathematics at the Hausdorff Center for Mathematics since 2008.

Professor Rauhut has, among many other things, written the book A Mathematical Introduction to Compressive Sensing together with Simon Foucart and published important research contributions about structured random matrices.

At the coming iTWIST workshop I am very much looking forward to hearing Holger Rauhut speak about low-rank tensor recovery. This is especially interesting because, while the compressed sensing (one-dimensional) and matrix completion (two-dimensional) problems are relatively straightforward to solve, things get much more complicated when you try to generalise them from ordinary vectors or matrices to higher-order tensors. Algorithms for the general higher-dimensional case seem to be much more elusive, and I am sure that Holger Rauhut can enlighten us on this topic (joint work with Reinhold Schneider and Zeljka Stojanac):

Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results. The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising but a number of open problems remain.
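To get a feel for the matrix-case baseline that the abstract builds on, here is a small sketch of low-rank matrix completion by nuclear norm minimization. It is purely my own illustration – the problem size, the sampling rate and the CVXPY formulation are assumptions of mine and not taken from the talk:

```python
# Minimal sketch (my own illustration, not from the talk): recover a low-rank
# matrix from a subset of its entries by nuclear norm minimization – the
# two-dimensional baseline that the talk generalises to higher-order tensors.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 20, 2                                      # 20 x 20 matrix of rank 2

# Ground-truth low-rank matrix and a random mask of observed entries
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = (rng.random((n, n)) < 0.5).astype(float)   # observe roughly half the entries

# minimise the nuclear norm subject to agreeing with the observed entries
X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [cp.multiply(mask, X) == mask * X_true])
problem.solve()

print("relative error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```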


iTWIST’16 Keynote Speakers: Florent Krzakala

Note: You can still register for iTWIST’16 until Monday the 1st of August!

Our next speaker at iTWIST’16 is Florent Krzakala. Much like Phil Schniter – the previous speaker presented here – Florent Krzakala has made important and enlightening contributions to the Approximate Message Passing family of algorithms.

Florent Krzakala is Professor of Physics at École Normale Supérieure in Paris, France. Professor Krzakala came to ENS in 2013 from ESPCI, Paris (Laboratoire de Physico-chimie Théorique), where he had been Maître de conférences since 2004. Maître de conférences is a particular French academic designation that I am afraid I am going to have to ask my French colleagues to explain to me 😉

Where Phil Schniter seems to have approached the (G)AMP algorithms, which have become quite popular for compressed sensing, from an estimation-algorithms-in-digital-communications background, Florent Krzakala has approached the topic from a statistical physics background, which seems to have brought a lot of interesting new insight to the table. For example, together with Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová, he has shown how AMP algorithms can perform impressively well compared to the classic l1-minimisation approach by using a special kind of so-called “seeded” measurement matrices in “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices“.
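To give an idea of what such a “seeded” measurement matrix looks like, here is a rough sketch of the block structure as I understand it from the paper: the columns are split into blocks, the first block of rows is sampled at a higher rate (the seed), and each row block only couples to a narrow band of column blocks. The block sizes, seeding rate, coupling strength and the symmetric one-block coupling band below are illustrative assumptions of mine, not the values used by the authors:

```python
# Rough sketch of a "seeded" (spatially coupled) Gaussian measurement matrix.
# All sizes, rates and coupling variances here are illustrative assumptions.
import numpy as np

def seeded_matrix(n=1000, L=10, alpha_seed=0.6, alpha_bulk=0.3, J=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n_block = n // L                                   # columns per block
    m_blocks = [int(alpha_seed * n_block)] + \
               [int(alpha_bulk * n_block)] * (L - 1)   # rows per block; first block over-sampled
    A = np.zeros((sum(m_blocks), n))
    row = 0
    for p, m_p in enumerate(m_blocks):
        for q in range(L):
            if q == p:
                var = 1.0      # strong coupling on the diagonal blocks
            elif abs(q - p) == 1:
                var = J        # weak coupling to the neighbouring blocks
            else:
                continue       # zero outside the band
            cols = slice(q * n_block, (q + 1) * n_block)
            A[row:row + m_p, cols] = rng.normal(0.0, np.sqrt(var / n), (m_p, n_block))
        row += m_p
    return A

A = seeded_matrix()
print(A.shape)   # overall undersampling ratio only slightly above alpha_bulk
```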

At this year’s iTWIST workshop in a few weeks, Professor Krzakala is going to speak about matrix factorisation problems and the approximate message passing framework. Specifically, we are going to hear about:

Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low-rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians or community detection in a graph.

I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights into these problems, leading to a characterization of the optimal possible performance, and, on the other, to the development of new powerful algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.

iTWIST’16 Keynote Speakers: Phil Schniter

With only one week left to register for iTWIST’16, I am going to walk you through the rest of our keynote speakers this week.

Our next speaker is Phil Schniter. Phil Schniter is Professor in the Department of Electrical and Computer Engineering at Ohio State University, USA.

Professor Schniter joined the Department of Electrical and Computer Engineering at OSU after graduating with a PhD in Electrical Engineering from Cornell University in 2000. Phil Schniter also has industrial experience from Tektronix from 1993 to 1996 and was a visiting professor at Eurecom (Sophia Antipolis, France) from October 2008 through February 2009, and at Supelec (Gif-sur-Yvette, France) from March 2009 through August 2009.

Professor Schniter has published an impressive selection of research papers, earlier especially within digital communications. In recent years he has been very active in the research around generalised approximate message passing (GAMP). GAMP is an estimation framework that has become popular in compressed sensing / sparse estimation. The reasons for the success of this algorithm (family), as I see it, are that it estimates under-sampled sparse vectors with accuracy comparable to the classic l1-minimisation approach in compressed sensing, and at a favourable computational complexity. At the same time, the framework is easily adapted to many different signal distributions and to other types of structure than plain sparsity. If you are dealing with a signal that is not distributed according to the Laplace distribution that the l1-minimisation approach implicitly assumes, you can adapt GAMP to this other (known) distribution and achieve better reconstruction than l1-minimisation. Even if you do not know the distribution, GAMP can be modified to estimate it automatically and quite efficiently. These and many other details are among Professor Schniter’s contributions to the research on GAMP.
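To make this a little more concrete, here is a compact sketch of the basic AMP iteration with a soft-thresholding denoiser – the simplest member of the (G)AMP family; GAMP itself generalises the denoiser and the measurement model far beyond this. The threshold rule and the fixed iteration count are my own simplifications, not Professor Schniter’s formulation:

```python
# Minimal sketch of the basic AMP iteration with a soft-thresholding denoiser.
# The threshold heuristic and iteration count are simplifications of mine.
import numpy as np

def soft(u, t):
    """Elementwise soft thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(m)    # simple estimate of the effective noise level
        r = x + A.T @ z                         # pseudo-data passed to the denoiser
        x_new = soft(r, tau)                    # denoising step
        # Residual with the Onsager correction term - this is what distinguishes
        # AMP from plain iterative soft thresholding.
        z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
        x = x_new
    return x

# Toy example: k-sparse vector, i.i.d. Gaussian measurement matrix
rng = np.random.default_rng(1)
n, m, k = 400, 200, 20
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = amp(y, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```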

At this year’s iTWIST, Phil Schniter will be describing recent work on robust variants of GAMP. In detail, the abstract reads (and this is joint work with Alyson Fletcher and Sundeep Rangan):

Robust approximate message passing

Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix “A” is large and well modeled as i.i.d. sub-Gaussian, AMP’s behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix.

In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. But unlike AMP, it leverages knowledge of the measurement matrix SVD to yield excellent performance over a larger class of measurement matrices. In particular, when the Gramian A’A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year’s iTWIST workshop in August – Lieven Vandenberghe.

Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning, heading an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck.

Karin Schnass joined the University of Innsbruck in December 2014 on an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, which she held from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master’s in mathematics with distinction: “Gabor Multipliers – A Self-Contained Survey”. She received her PhD in computer, communication and information sciences from EPFL, Switzerland, in 2009: “Sparsity & Dictionaries – Algorithms & Design”. Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements:

y = A x

From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem:

minimize |x|_1 subject to |y - A x|_2 < ε

If the vector x that we observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D:

x = D α

where we take our measurements of x using some measurement matrix M:

y = M x = M D α = A α

and we reconstruct the sparse vector α as follows:

minimize |α|_1 subject to |y - M D α|_2 < ε

The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it:

β = D' x

Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows:

minimize |D' x|_1 subject to |y - M x|_2 < ε

Now if D is a (square) orthonormal matrix such as an IDFT, we can take D’ to be a DFT matrix and the two are simply each other’s inverses. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D’, which behaves differently from the synthesis dictionary.
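To make the two formulations concrete, here is a small sketch of both reconstruction problems in CVXPY. The overcomplete dictionary, the choice of D’ as the transpose of D, and the problem sizes are illustrative assumptions of mine – the sketch is only meant to show the two optimisation problems side by side:

```python
# Sketch of the synthesis and analysis reconstruction problems in CVXPY.
# Dictionary, operator D' (taken as D transpose here) and sizes are illustrative.
import numpy as np
import cvxpy as cp
from scipy.fft import idct

rng = np.random.default_rng(2)
n, p, m, k, eps = 64, 128, 32, 5, 1e-3

# Overcomplete synthesis dictionary D (n x p): an inverse-DCT basis padded
# with random columns. x_true has a k-sparse representation alpha in D.
D = np.hstack([idct(np.eye(n), norm="ortho"),
               rng.standard_normal((n, p - n)) / np.sqrt(n)])
alpha_true = np.zeros(p)
alpha_true[rng.choice(p, k, replace=False)] = 1.0
x_true = D @ alpha_true

M = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = M @ x_true

# Synthesis: minimize |alpha|_1  subject to  |y - M D alpha|_2 <= eps
alpha = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm(alpha, 1)),
           [cp.norm(y - M @ D @ alpha, 2) <= eps]).solve()
x_synthesis = D @ alpha.value

# Analysis: minimize |D' x|_1  subject to  |y - M x|_2 <= eps
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(D.T @ x, 1)),
           [cp.norm(y - M @ x, 2) <= eps]).solve()
x_analysis = x.value

print(np.linalg.norm(x_synthesis - x_true), np.linalg.norm(x_analysis - x_true))
```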

Karin will give an overview over the synthesis and the analysis model and talk about how to learn dictionaries that are useful for either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler):

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview over both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.

My problem with ResearchGate and Academia.edu

TL;DR: you can find my publications in Aalborg University’s repository or on ORCID.

ResearchGate – wow, a social network for scientists and researchers you might think. But think again about the ‘wow’. At least I am not so impressed. Here’s why…

I once created a profile on ResearchGate out of curiosity. It initially seemed like a good idea, but I soon realised that this would just add to the list of profile pages I would have to update, sigh. But so far I have kept my profile for fear of missing out. What if others cannot find my publications if I am not on ResearchGate? And so on…

But updating my profile is just the tip of the iceberg. What I find far more problematic about the site is their keen attempts to create a walled garden community. Let me explain what I mean. Take this paper for example (this is not a critique of this paper – in fact I think this is an example of a very interesting paper): One-Bit Compressive Sensing of Dictionary-Sparse Signals by Rich Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan, and Mary Wootters:

  1. First of all, when you click the link to the paper above you cannot even see it without logging in on ResearchGate.
    “What’s the problem?”, you might think. “ResearchGate is free – just create an account and log in”. But I would argue that open access is not open access if you have to register and log in to read the paper – even if it is free.
  2. Once you log in and can finally see the paper, it turns out that you cannot read the actual paper. This appears to be because the author has not uploaded the full text and ResearchGate displays a button where you can “Request full-text” to ask the author to provide it.
    “Now what?!”, you are thinking. “This is a great service to both readers and authors, making it easy to connect authors to their readers and enabling them to easily give the readers what they are looking for” – wrong! This is a hoax set up by ResearchGate to convince readers that they are a great benevolent provider of open access literature.

The problem is that the paper is already accessible here: on arXiv – where it should be. ResearchGate has just scraped the paper info from arXiv and is trying to persuade the author to upload it to ResearchGate as well, to make it look like ResearchGate is the place to go to read this paper. They could have chosen to simply link to the paper on arXiv, making it easy for readers to find it straight away. But they will not do that, because they want readers to stay inside their walled garden, controlling the information flow to create a false impression that ResearchGate is the only solution.

As if this were not enough, there are other reasons to reconsider your membership. For example, with their ResearchGate score they are contributing to the same kind of metric obsession we know from journal impact factor abuse. The problem is that this score is neither transparent nor reproducible, contributing only to an obsession with numbers that drives “shiny” research and encourages gaming of metrics.

I don’t know about you, but I have had enough – I quit!

…The clever reader has checked and noticed that I have not deleted my ResearchGate profile. Why? Am I just another hypocrite? Look closer – you will notice that the only publication on my profile is a note explaining why I do not wish to use ResearchGate. I think it is better to actively inform about my choice and attack the problem from the inside rather than just staying silently away.

Update 6th of July 2016…

I have now had a closer look at Academia.edu as well and it turns out that they are doing more or less the same, so I have decided to quit this network as well. They do not let you read papers without logging in and they also seem to have papers obviously scraped from arXiv, waiting for the author to upload the full-text version and ignoring the fact that it is available on arXiv. Again, they want to gather everything in-house to make it appear as if they are the rightful gate-keepers of all research.

As I did on ResearchGate, I have left my profile on Academia.edu with just a single publication, which is basically this blog post (plus a link to my publications in Aalborg University’s repository).
