Adventures in Signal Processing and Open Science

Category: Signal processing

Magni 1.6.0 released

Our newest version of the Magni software package was just released on the 2nd of November. This release includes new features that we (the team behind the Magni package) hope some of you will find particularly interesting.

The major new features in this release are approximate message passing (AMP) and generalised approximate message passing (GAMP) estimation algorithms for signal reconstruction. These new algorithms can be found in the magni.cs.reconstruction.amp and magni.cs.reconstruction.gamp modules, respectively. Note that the magni.cs sub-package contains algorithms applicable to compressed sensing (CS) and CS-like reconstruction problems in general – and not just atomic force microscopy (AFM).
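For readers who have not met AMP before, here is a minimal NumPy sketch of the core iteration – soft thresholding plus the so-called Onsager correction term – for the basic sparse reconstruction problem. This is only an illustration of the algorithm family, not Magni’s implementation or API, and the threshold policy is a deliberately crude assumption:

import numpy as np

def soft_threshold(v, theta):
    # Entrywise soft thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(y, A, iterations=30):
    # Basic AMP for y = A x with sparse x (A an m-by-n matrix, m < n).
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iterations):
        theta = np.linalg.norm(z) / np.sqrt(m)  # crude residual-based threshold
        x_new = soft_threshold(x + A.T @ z, theta)
        # Onsager correction: residual feedback scaled by the average
        # derivative of the denoiser (the fraction of surviving entries).
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)
        x = x_new
    return x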

If you are not familiar with the Magni package and are interested in compressed sensing and/or atomic force microscopy, we invite you to explore the functionality the package offers. It also contains various iterative thresholding reconstruction algorithms, dictionary and measurement matrices for 1D and 2D compressed sensing, various features for combining this with AFM imaging, and mechanisms for validating function input and storing meta-data to aid reproducibility.

The Magni package was designed and developed with a strong focus on well-tested, -validated and -documented code.

The Magni package is a product of the FastAFM research project.

Download

  • The package can be found on GitHub, where we continually release new versions: release 1.6.0 is here.
  • The package documentation can be read here: Magni documentation
  • The package can be installed from PyPI or from Anaconda.

iTWIST’16 Keynote Speakers: Gerhard Wunder

iTWIST’16 is starting less than two weeks from now and we have 46 participants coming to Aalborg for the event (and I can still squeeze in a couple more – single-day registrations possible – so contact me if you are interested; only 4 places left before I have to order a bigger bus for the banquet dinner 🙂 ).

Our next keynote speaker in line for the event is Gerhard Wunder, head of the Heisenberg Communications and Information Theory Group. Gerhard Wunder recently moved to Freie Universität Berlin from Technische Universität Berlin. Dr. Wunder is currently heading two research projects: the EU FP7 project 5GNOW and the PROPHYLAXE project funded by the German Ministry of Education and Research, and he is a member of the management team of the EU H2020 FANTASTIC-5G project. He currently also receives funding in the German DFG priority programmes SPP 1798 CoSIP (Compressed Sensing in Information Processing) and the upcoming SPP 1914 Cyber-Physical Networking.

Gerhard Wunder conducts research in wireless communication technologies and has recently started introducing principles of sparsity and compressed sensing into wireless communication. As an example of this, Gerhard Wunder recently published the paper “Sparse Signal Processing Concepts for Efficient 5G System Design” in IEEE Access together with Holger Boche, Thomas Strohmer, and Peter Jung.

At the coming iTWIST workshop, Gerhard Wunder is going to introduce us to the use of compressive sensing in random access medium access control (MAC), applied in massive machine-type communications – a major feature being extensively researched for coming 5G communication standards. The abstract of Dr. Wunder’s talk reads:

Compressive Coded Random Access for 5G Massive Machine-type Communication

Massive Machine-type Communication (MMC) within the Internet of Things (IoT) is an important future market segment in 5G, but it is not yet efficiently supported in cellular systems. A major challenge in MMC is the very unfavorable payload to control overhead relation due to small messages and oversized Medium Access (MAC) procedures. In this talk we follow up on a recent concept called Compressive Coded Random Access (CCRA), combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a “one shot” random access procedure where users can send a message without a priori synchronizing with the network. In this procedure a common overloaded control channel is used to jointly detect sparse user activity and sparse channel profiles. In the same slot, data is detected based on the already available information. In the talk we show how CS algorithms, and in particular the concept of hierarchical sparsity, can be used to design efficient and scalable access protocols. The CCRA concept is introduced in full detail and further generalizations are discussed. We present algorithms and analysis that prove the additional benefit of the concept.

iTWIST’16 Keynote Speakers: Holger Rauhut

At this year’s international Travelling Workshop on Interactions between Sparse models and Technology (iTWIST) we have keynote speakers from several different scientific backgrounds. Our next speaker is a mathematician with a solid track record in compressed sensing and matrix/tensor completion: Holger Rauhut.

Holger Rauhut is Professor for Mathematics and Head of Chair C for Mathematics (Analysis) at RWTH Aachen University. Professor Rauhut came to RWTH Aachen in 2013 from the Hausdorff Center for Mathematics, University of Bonn, where he had been Professor for Mathematics since 2008.

Professor Rauhut has, among many other things, written the book A Mathematical Introduction to Compressive Sensing together with Simon Foucart and published important research contributions about structured random matrices.

At the coming iTWIST workshop I am very much looking forward to hearing Holger Rauhut speak about low-rank tensor recovery. This is especially interesting because, while the compressed sensing (one-dimensional) and matrix completion (two-dimensional) problems are relatively straightforward to solve, things get much more complicated when you try to generalise them from ordinary vectors or matrices to higher-order tensors. Algorithms for the general higher-dimensional case seem to be much more elusive, and I am sure that Holger Rauhut can enlighten us on this topic (joint work with Reinhold Schneider and Zeljka Stojanac):

Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results. The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising but a number of open problems remain.
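For reference, the nuclear norm minimization mentioned in the abstract is the convex program used in the matrix (order-2) case: given linear measurements y = \mathcal{A}(X) of a low-rank matrix X, one solves

\underset{X}{\mathrm{argmin}} \quad \| X \|_* \quad \text{s.t.} \quad \mathcal{A}(X) = y

where the nuclear norm \| X \|_* is the sum of the singular values of X. It is exactly this program that does not carry over easily to higher-order tensors, since the tensor nuclear norm is NP-hard to compute, as the abstract notes.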

iTWIST’16 Keynote Speakers: Florent Krzakala

Note: You can still register for iTWIST’16 until Monday the 1st of August!

Our next speaker at iTWIST’16 is Florent Krzakala. Much like Phil Schniter – the previous speaker presented here – Florent Krzakala has made important and enlightening contributions to the Approximate Message Passing family of algorithms.

Florent Krzakala is Professor of Physics at École Normale Supérieure in Paris, France. Professor Krzakala came to ENS in 2013 from ESPCI, Paris (Laboratoire de Physico-Chimie Théorique), where he had been Maître de conférences since 2004. Maître de conférences is a particular French academic designation that I am afraid I am going to have to ask my French colleagues to explain to me 😉

Where Phil Schniter seems to have approached the (G)AMP algorithms, which have become quite popular for compressed sensing, from a background in estimation algorithms for digital communications, Florent Krzakala has approached the topic from a statistical physics background, which seems to have brought a lot of interesting new insight to the table. For example, together with Marc Mézard, Francois Sausset, Yifan Sun, and Lenka Zdeborová, he has shown how AMP algorithms can perform impressively well compared to the classic l1-minimization approach by using a special kind of so-called “seeded” measurement matrices in “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices”.

At this year’s iTWIST workshop in a few weeks, Professor Krzakala is going to speak about matrix factorisation problems and the approximate message passing framework. Specifically, we are going to hear about:

Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians, or community detection in a graph.

I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights into these problems, leading to a characterization of the optimal possible performance, and on the other, to the development of new powerful algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.

iTWIST’16 Keynote Speakers: Phil Schniter

With only one week left to register for iTWIST’16, I am going to walk you through the rest of our keynote speakers this week.

Our next speaker is Phil Schniter. Phil Schniter is Professor in the Department of Electrical and Computer Engineering at Ohio State University, USA.

Professor Schniter joined the Department of Electrical and Computer Engineering at OSU after graduating with a PhD in Electrical Engineering from Cornell University in 2000. Phil Schniter also has industrial experience from Tektronix from 1993 to 1996 and has been a visiting professor at Eurecom (Sophia Antipolis, France) from October 2008 through February 2009, and at Supelec (Gif sur Yvette, France) from March 2009 through August 2009.

Professor Schniter has published an impressive selection of research papers, earlier especially within digital communication. In recent years he has been very active in the research on generalised approximate message passing (GAMP). GAMP is an estimation framework that has become popular in compressed sensing / sparse estimation. The reasons for the success of this algorithm family, as I see it, are that it estimates under-sampled sparse vectors with accuracy comparable to the classic l1-minimisation approach in compressed sensing, at a favourable computational complexity. At the same time, the framework is easily adapted to many different signal distributions and to other types of structure than plain sparsity. If you are dealing with a signal that is not distributed according to the Laplace distribution that the l1-minimisation approach implicitly assumes, you can adapt GAMP to this other (known) distribution and achieve better reconstruction than l1-minimisation offers. Even if you do not know the distribution, GAMP can be modified to estimate it automatically and quite efficiently. This and many other details are among Professor Schniter’s contributions to the research on GAMP.
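To make the point about adapting GAMP to a known signal distribution concrete, here is a small sketch (my own illustration, not Professor Schniter’s code) of the scalar posterior-mean denoiser for a Bernoulli-Gaussian prior – the component one would plug into the (G)AMP iteration in place of the soft thresholding that corresponds to the Laplace assumption:

import numpy as np

def bg_mmse_denoiser(r, tau, rho=0.1, sigma2=1.0):
    # Prior: x = 0 with probability (1 - rho); x ~ N(0, sigma2) with probability rho.
    # Observation model: r = x + N(0, tau). Returns the posterior mean E[x | r].
    v = sigma2 + tau
    # Likelihood ratio of the "zero" vs "active" hypothesis for each entry
    ratio = ((1 - rho) / rho) * np.sqrt(v / tau) * np.exp(-0.5 * r**2 * (1.0 / tau - 1.0 / v))
    pi = 1.0 / (1.0 + ratio)      # posterior probability that an entry is non-zero
    return pi * (sigma2 / v) * r  # Wiener-style shrinkage, gated by pi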

At this year’s iTWIST, Phil Schniter will be describing recent work on robust variants of GAMP. In detail, the abstract reads (and this is joint work with Alyson Fletcher and Sundeep Rangan):

Robust approximate message passing

Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix “A” is large and well modeled as i.i.d. sub-Gaussian, AMP’s behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix.

In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. But unlike AMP, it leverages knowledge of the measurement matrix SVD to yield excellent performance over a larger class of measurement matrices. In particular, when the Gramian A’A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

iTWIST’16 Keynote Speakers: Karin Schnass

Last week we heard about the first of our keynote speakers at this year’s iTWIST workshop in August – Lieven Vandenberghe.

Next up on my list of speakers is Karin Schnass. Karin Schnass is an expert on dictionary learning and is heading an FWF-START project on dictionary learning in the Applied Mathematics group in the Department of Mathematics at the University of Innsbruck.

Karin Schnass joined the University of Innsbruck in December 2014 on an Erwin Schrödinger Research Fellowship, returning from a research position at the University of Sassari, Italy, held from 2012 to 2014. She originally graduated from the University of Vienna, Austria, with a master’s degree in mathematics with distinction: “Gabor Multipliers – A Self-Contained Survey”. She graduated in 2009 with a PhD in computer, communication and information sciences from EPFL, Switzerland: “Sparsity & Dictionaries – Algorithms & Design”. Karin Schnass has, among other things, introduced the iterative thresholding and K-means (ITKM) algorithms for dictionary learning and published the first theoretical paper on dictionary learning (on arXiv) with Rémi Gribonval.

At our workshop this August, I am looking forward to hearing Karin Schnass talk about Sparsity, Co-sparsity and Learning. In compressed sensing, the so-called synthesis model has been the prevailing model since the beginning. First, we have the measurements:

y = A x

From the measurements, we can reconstruct the sparse vector x by solving this convex optimisation problem:

minimize |x|_1 subject to |y - A x|_2 < ε

If the vector x we observe is not sparse, we can still do this if we can find a sparse representation α of x in some dictionary D:

x = D α

where we take our measurements of x using some measurement matrix M:

y = M x = M D α = A α

and we reconstruct the sparse vector α as follows:

minimize |α|_1 subject to |y - M D α|_2 < ε

The above is called the synthesis model because it works by using some sparse vector α to synthesize the vector x that we observe. There is an alternative to this model, called the analysis model, where we analyse an observed vector x to find some sparse representation β of it:

β = D' x

Here D’ is also a dictionary, but it is not the same dictionary as in the synthesis case. We can now reconstruct the vector x from the measurements y as follows:

minimize |D' x|_1 subject to |y - M x|_2 < ε

Now if D is a (square) orthonormal matrix such as an IDFT, we can consider D' a DFT matrix and the two are simply each other’s inverses. In this case, the synthesis and analysis reconstruction problems above are equivalent. The interesting case is when the synthesis dictionary D is a so-called over-complete dictionary – a fat matrix. The analysis counterpart of this is a tall analysis dictionary D', which behaves differently from the over-complete synthesis dictionary.
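To make the distinction concrete, here is a minimal sketch of the two reconstruction problems above using CVXPY – my choice of solver purely for illustration; any convex optimisation package would do (Dt stands in for the analysis dictionary D'):

import cvxpy as cp

def synthesis_reconstruct(y, M, D, eps):
    # Find a sparse coefficient vector alpha and return x = D alpha.
    alpha = cp.Variable(D.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(alpha)),
               [cp.norm(y - M @ D @ alpha, 2) <= eps]).solve()
    return D @ alpha.value

def analysis_reconstruct(y, M, Dt, eps):
    # Find x whose analysis coefficients Dt x are sparse.
    x = cp.Variable(M.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(Dt @ x)),
               [cp.norm(y - M @ x, 2) <= eps]).solve()
    return x.value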

Karin will give an overview over the synthesis and the analysis model and talk about how to learn dictionaries that are useful for either case. Specifically, she plans to tell us about (joint work with Michael Sandbichler):

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview over both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally we will discuss the resulting learning algorithms and ongoing research directions.

iTWIST’16 Keynote Speakers: Lieven Vandenberghe

The workshop program has been ready for some time now, and we are handling the final practicalities to be ready to welcome you in Aalborg in August for the iTWIST’16 workshop. So now I think it is time to start introducing you to our – IMO – pretty impressive line-up of keynote speakers.

First up is Prof. Lieven Vandenberghe from UCLA. Prof. Vandenberghe is an expert on convex optimisation and signal processing and is – among other things – well known for his fundamental textbook “Convex Optimization”, written together with Stephen Boyd.

Lieven Vandenberghe is Professor in the Electrical Engineering Department at UCLA. He joined UCLA in 1997, following postdoctoral appointments at K.U. Leuven and Stanford University, and has held visiting professor positions at K.U. Leuven and the Technical University of Denmark. In addition to “Convex Optimization”, he also edited the “Handbook of Semidefinite Programming” with Henry Wolkowicz and Romesh Saigal.

At iTWIST, I am looking forward to hearing him speak about Semidefinite programming methods for continuous sparse optimization. So far, it is my impression that most theory and literature on compressed sensing and sparse methods has relied on discrete dictionaries consisting of a basis or frame of individual dictionary atoms. If we take the discrete Fourier transform (DFT) as an example, the dictionary has fixed atoms corresponding to a set of discrete frequencies. More recently, theories have started emerging that allow continuous dictionaries instead (see for example also the work of Ben Adcock, Anders Hansen, Bogdan Roman et al.). As far as I understand, this is a generalisation that in principle allows you to get rid of the discretised atoms and consider any atoms on the continuum “in between” as well. This is what Prof. Vandenberghe has planned for us so far (and this is joint work with Hsiao-Han Chao):

We discuss extensions of semidefinite programming methods for 1-norm minimization over infinite dictionaries of complex exponentials, which have recently been proposed for superresolution and gridless compressed sensing.

We show that results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple constructive proofs for the semidefinite representations of the penalties used in these problems. The connection leads to extensions to more general dictionaries associated with linear state-space models and matrix pencils.

The results will be illustrated with applications in spectral estimation, array signal processing, and numerical analysis.
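As a taste of the kind of semidefinite representations involved, the 1-norm (atomic norm) of a vector x over the continuous dictionary of complex exponentials has – if I recall the gridless compressed sensing literature correctly – a characterisation of the form

\| x \|_{\mathcal{A}} = \inf_{u, t} \left\{ \tfrac{1}{2n} \mathrm{Tr}(\mathrm{Toep}(u)) + \tfrac{t}{2} \; : \; \begin{bmatrix} \mathrm{Toep}(u) & x \\ x^H & t \end{bmatrix} \succeq 0 \right\}

where \mathrm{Toep}(u) is the Hermitian Toeplitz matrix with first column u – that is, the continuous 1-norm penalty is itself computed by solving a semidefinite program.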

iTWIST’16 is taking shape

This year’s international Traveling Workshop on Interactions Between Sparse Models and Technology is starting to take shape now. The workshop will take place on the 24th-26th of August 2016 in Aalborg. See also this recent post about the workshop.

Photo by Alan Lam (CC-BY-ND)

Aalborg is a beautiful city in the northern part of Denmark and – what many of you probably do not know – Aalborg was actually ranked “Europe’s happiest city” in a recent survey by the European Commission.

It is now possible to register for the workshop and if you are quick and register before July, you get it all for only 200€. That is, three days of workshop, including lunches and a social event with dinner on Thursday evening.

There are plenty of good reasons to attend the workshop. In addition to the many exciting contributed talks and posters that we are now reviewing, we have an impressive line-up of 9 invited keynote speakers! I will be presenting what the speakers have in store for you here on this blog in the coming days.


international Traveling Workshop on Interactions between Sparse models and Technology

On the 24th to 26th of August 2016, we are organising a workshop called international Traveling Workshop on Interactions between Sparse models and Technology (iTWIST). iTWIST is a biennial workshop organised by a cross-European committee of researchers and academics on theory and applications of sparse models in signal processing and related areas. The workshop has so far taken place in Marseille, France in 2012 and in Namur, Belgium in 2014.

I was very excited to learn last fall that the organising committee of the previous two instalments of the workshop had the confidence to let Morten Nielsen and me organise the workshop in Aalborg (Denmark) in 2016.

Themes

This year, the workshop continues many of the themes from the first two editions and adds a few new ones:

  • Sparsity-driven data sensing and processing (e.g., optics, computer vision, genomics, biomedical, digital communication, channel estimation, astronomy)
  • Application of sparse models in non-convex/non-linear inverse problems (e.g., phase retrieval, blind deconvolution, self calibration)
  • Approximate probabilistic inference for sparse problems
  • Sparse machine learning and inference
  • “Blind” inverse problems and dictionary learning
  • Optimization for sparse modelling
  • Information theory, geometry and randomness
  • Sparsity? What’s next?
    • Discrete-valued signals
    • Union of low-dimensional spaces
    • Cosparsity, mixed/group norm, model-based, low-complexity models, …
  • Matrix/manifold sensing/processing (graph, low-rank approximation, …)
  • Complexity/accuracy tradeoffs in numerical methods/optimization
  • Electronic/optical compressive sensors (hardware)

I would like to point out here, as Igor Carron mentioned recently, that HW designs are also very welcome at the workshop – it is not just theory and thought experiments. We are very interested in getting a good mix between theoretical aspects and applications of sparsity and related techniques.

Keynote Speakers

I am very excited to be able to present a range of IMO very impressive keynote speakers covering a wide range of themes:

  • Lieven Vandenberghe – University of California, Los Angeles – homepage
  • Gerhard Wunder – TU Berlin & Fraunhofer Institute – homepage
  • Holger Rauhut – RWTH Aachen – homepage
  • Petros Boufounos – Mitsubishi Electric Research Labs – homepage
  • Florent Krzakala and Eric Tramel – ENS Paris – homepage
  • Phil Schniter – Ohio State University – homepage
  • Karin Schnass – University of Innsbruck – homepage
  • Rachel Ward – University of Texas at Austin – homepage
  • Bogdan Roman – University of Cambridge – homepage

The rest of the workshop is open to contributions from the research community. Please send us your papers (in the form of 2-page extended abstracts – see details here). Your research can be presented as an oral presentation or a poster. You can state your preference (oral or poster) during the submission process, but we cannot guarantee that we can honour your request and we reserve the right to assign papers to either category in order to put together a coherent programme. Please note that we consider oral and poster presentations equally important – poster presentations will not be stowed away in a dusty corner during coffee breaks but will have one or more dedicated slots in the programme!

Open Science

In order to support open science, we strongly encourage authors to publish any code or data accompanying their papers in a publicly accessible repository, such as GitHub, Figshare, or Zenodo.

The proceedings of the workshop will be published on arXiv as well as in SJS in order to make the papers openly accessible and to encourage post-publication discussion.

Compressed Sensing – and more – in Python

The availability of compressed sensing reconstruction algorithms for Python has so far been quite scarce. A new software package improves on this situation. The package PyUnLocBox from the LTS2 lab at EPFL is a convex optimisation toolbox using proximal splitting methods. It can, among other things, be used to solve the regularised version of the LASSO/BPDN optimisation problem used for reconstruction in compressed sensing:

\underset{x}{\mathrm{argmin}} \| Ax - y \|_2^2 + \tau \| x \|_1

See http://pyunlocbox.readthedocs.org/en/latest/tutorials/compressed_sensing_1.html
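As a quick taste, here is a sketch of what solving the regularised problem looks like, based on my reading of the tutorial linked above (the function and solver names are assumed from the package documentation):

import numpy as np
from pyunlocbox import functions, solvers

# Toy compressed sensing problem: m random measurements of a k-sparse vector.
np.random.seed(1)
n, m, k = 200, 80, 10
A = np.random.randn(m, n) / np.sqrt(m)
x_true = np.zeros(n)
x_true[np.random.choice(n, k, replace=False)] = np.random.randn(k)
y = A @ x_true

tau = 1.0
f1 = functions.norm_l1(lambda_=tau)  # tau * ||x||_1
f2 = functions.norm_l2(y=y, A=A)     # ||A x - y||_2^2
result = solvers.solve([f1, f2], x0=np.zeros(n),
                       solver=solvers.forward_backward(), rtol=1e-6, maxit=500)
x_hat = result['sol']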

Heard through Pierre Vandergheynst.

I have yet to find out if it also solves the constrained version. Update: Pierre Vandergheynst informed me that the package does not yet solve the constrained version of the above optimisation problem, but it is coming:

\underset{x}{\mathrm{argmin}} \quad \| x \|_1 \\ \text{s.t.} \quad \| Ax - y \|_2 < \epsilon
