magni.cs.reconstruction.gamp._algorithm module

Module providing the core Generalised Approximate Message Passing (GAMP) algorithm.

Routine listings

run(y, A, A_asq=None)
Run the GAMP reconstruction algorithm.

See also

magni.cs.reconstruction.gamp._config
Configuration options.
magni.cs.reconstruction.gamp.input_channel
Available input channels.
magni.cs.reconstruction.gamp.output_channel
Available output channels.
magni.cs.reconstruction.gamp.stop_criterion
Available stop criteria.

Notes

The default configuration of the GAMP algorithm provides the s-GB AMP algorithm from [1] using an MSE convergence based stop criterion. The input channel, the output channel, and the stop criterion may all be changed.
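
For illustration, changing the stop criterion might look roughly like the following sketch. The 'stop_criterion' config key and the Residual class name are assumptions inferred from the modules listed under See also; check magni.cs.reconstruction.gamp._config and magni.cs.reconstruction.gamp.stop_criterion for the actual names.

# Sketch only: the config key and class name below are assumptions.
from magni.cs.reconstruction.gamp import config
from magni.cs.reconstruction.gamp import stop_criterion

# Assumed: the config accepts a stop criterion class from the
# stop_criterion module, e.g. a residual based criterion.
config['stop_criterion'] = stop_criterion.Residual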

This implementation allows for the use of sum approximations of the squared system matrix as detailed in [1] and [2]. Furthermore, a simple damping option is available based on the description in [3] (see also [4] for more details on damping in GAMP).
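
To make the sum approximation option concrete, the following is a small self-contained sketch (not part of the documented example; the channel parameters mirror those used in the Examples section below). Passing A_asq explicitly uses the entrywise absolute value squared system matrix, whereas omitting it falls back to the sum approximation:

import numpy as np
from magni.cs.reconstruction.gamp import run, config

np.random.seed(6021)  # arbitrary seed, only for reproducibility of the sketch
k, m, n = 10, 200, 400
A = np.random.randn(m, n) / np.sqrt(m)
alpha = np.zeros((n, 1))
alpha[:k] = np.random.normal(scale=2, size=(k, 1))
y = A.dot(alpha) + np.random.normal(scale=0.15, size=(m, 1))

# Channel parameters as used in the Examples section below
config['input_channel_parameters'] = {
    'tau': float(k) / n, 'theta_bar': 0, 'theta_tilde': 4, 'use_em': False}
config['output_channel_parameters'] = {
    'sigma_sq': 0.15**2, 'noise_level_estimation': 'fixed'}

alpha_hat_exact = run(y, A, np.abs(A)**2)  # exact entrywise squared matrix
alpha_hat_approx = run(y, A)               # A_asq=None: sum approximation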

References

[1] F. Krzakala, M. Mezard, F. Sausset, Y. Sun, and L. Zdeborova, “Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices”, Journal of Statistical Mechanics: Theory and Experiment, vol. P08009, pp. 1-57, Aug. 2012.
[2] S. Rangan, “Generalized Approximate Message Passing for Estimation with Random Linear Mixing”, arXiv:1010.5141v2, pp. 1-22, Aug. 2012.
[3] S. Rangan, P. Schniter, and A. Fletcher, “On the Convergence of Approximate Message Passing with Arbitrary Matrices”, in IEEE International Symposium on Information Theory (ISIT), Honolulu, Hawaii, USA, Jun. 29 - Jul. 4, 2014, pp. 236-240.
[4] J. Vila, P. Schniter, S. Rangan, F. Krzakala, and L. Zdeborova, “Adaptive Damping and Mean Removal for the Generalized Approximate Message Passing Algorithm”, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), South Brisbane, Queensland, Australia, Apr. 19-24, 2015, pp. 2021-2025.
magni.cs.reconstruction.gamp._algorithm.run(y, A, A_asq=None)

Run the GAMP reconstruction algorithm.

Parameters:
  • y (ndarray) – The m x 1 measurement vector.
  • A (ndarray or magni.utils.matrices.{Matrix, MatrixCollection}) – The m x n matrix which is the product of the measurement matrix and the dictionary matrix.
  • A_asq (ndarray or magni.utils.matrices.{Matrix, MatrixCollection} or None) – The m x n matrix which is the entrywise absolute value squared product of the measurement matrix and the dictionary matrix (the default is None, which implies that a sum approximation is used).
Returns:

  • alpha (ndarray) – The n x 1 reconstructed coefficient vector.
  • history (dict, optional) – The dictionary of various measures tracked in the GAMP iterations.

Notes

Optionally, the algorithm may be configured to save and return the iteration history along with the reconstruction result. The returned history contains the following (a brief inspection sketch is given after the list):

  • alpha_bar : Mean coefficient estimates (the reconstruction coefficients).
  • alpha_tilde : Variance coefficient estimates.
  • MSE : Solution mean squared error (if the true solution is known).
  • input_channel_parameters : The state of the input channel.
  • output_channel_parameters : The state of the output channel.
  • stop_criterion : The currently used stop criterion.
  • stop_criterion_value : The value of the stop criterion.
  • stop_iteration : The iteration at which the algorithm stopped.
  • stop_reason : The reason for termination of the algorithm.
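
As a brief sketch of how the returned history might be inspected (assuming config['report_history'] and config['true_solution'] have been set as in the Examples below, and that y, A, and A_asq are defined as there):

# Requires config['report_history'] = True; the MSE entry is only
# present when config['true_solution'] has been set.
alpha_hat, history = run(y, A, A_asq)
print(history['stop_reason'])      # reason for termination
print(history['stop_iteration'])   # iteration at which the algorithm stopped
print(history['MSE'])              # per-iteration solution mean squared error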

Examples

For example, recovering a vector from AWGN noisy measurements using GAMP

>>> import numpy as np
>>> from magni.cs.reconstruction.gamp import run, config
>>> np.random.seed(seed=6028)
>>> k, m, n = 10, 200, 400
>>> tau = float(k) / n
>>> A = 1 / np.sqrt(m) * np.random.randn(m, n)
>>> A_asq = np.abs(A)**2
>>> alpha = np.zeros((n, 1))
>>> alpha[:k] = np.random.normal(scale=2, size=(k, 1))
>>> np_printoptions = np.get_printoptions()
>>> np.set_printoptions(suppress=True, threshold=k+2)
>>> alpha[:k + 2]
array([[ 1.92709461],
       [ 0.74378508],
       [-3.2418159 ],
       [-1.32277347],
       [ 0.90118   ],
       [-0.19157262],
       [ 0.82855712],
       [ 0.24817994],
       [-1.43034777],
       [-0.21232344],
       [ 0.        ],
       [ 0.        ]])
>>> sigma = 0.15
>>> y = A.dot(alpha) + np.random.normal(scale=sigma, size=(A.shape[0], 1))
>>> input_channel_params = {'tau': tau, 'theta_bar': 0, 'theta_tilde': 4,
... 'use_em': False}
>>> config['input_channel_parameters'] = input_channel_params
>>> output_channel_params = {'sigma_sq': sigma**2,
... 'noise_level_estimation': 'fixed'}
>>> config['output_channel_parameters'] = output_channel_params
>>> alpha_hat = run(y, A, A_asq)
>>> alpha_hat[:k + 2]
array([[ 1.93810961],
       [ 0.6955502 ],
       [-3.39759349],
       [-1.35533562],
       [ 1.10524227],
       [-0.00594848],
       [ 0.79274671],
       [ 0.04895264],
       [-1.08726071],
       [-0.00142911],
       [ 0.00022861],
       [-0.00004272]])
>>> np.sum(np.abs(alpha - alpha_hat) > sigma * 3)
0

or recover the same vector, this time returning a history that compares the per-iteration solution to the true vector and printing the A_asq details

>>> config['report_A_asq_setup'] = True
>>> config['report_history'] = True
>>> config['true_solution'] = alpha
>>> alpha_hat, history = run(y, A, A_asq) 
GAMP is using the A_asq: [[ 0.024 ..., 0.002]
 ...,
 [ 0. ..., 0.014]]
The sum approximation method is: None
>>> alpha_hat[:k + 2]
array([[ 1.93810961],
       [ 0.6955502 ],
       [-3.39759349],
       [-1.35533562],
       [ 1.10524227],
       [-0.00594848],
       [ 0.79274671],
       [ 0.04895264],
       [-1.08726071],
       [-0.00142911],
       [ 0.00022861],
       [-0.00004272]])
>>> np.array(history['MSE']).reshape(-1, 1)[1:11]
array([[ 0.04562729],
       [ 0.01328304],
       [ 0.00112098],
       [ 0.00074968],
       [ 0.00080175],
       [ 0.00076615],
       [ 0.00077043]])

or recover the same vector using sample variance AWGN noise level estimation

>>> config['report_A_asq_setup'] = False
>>> config['report_history'] = False
>>> output_channel_params['noise_level_estimation'] = 'sample_variance'
>>> config['output_channel_parameters'] = output_channel_params
>>> alpha_hat = run(y, A, A_asq)
>>> alpha_hat[:k + 2]
array([[ 1.94820622],
       [ 0.72162206],
       [-3.39978431],
       [-1.35357001],
       [ 1.10701779],
       [-0.00834467],
       [ 0.79790879],
       [ 0.08441384],
       [-1.08946306],
       [-0.0015894 ],
       [ 0.00020561],
       [-0.00003623]])
>>> np.sum(np.abs(alpha - alpha_hat) > sigma * 3)
0

or recover the same vector using median AWGN noise level estimation

>>> output_channel_params['noise_level_estimation'] = 'median'
>>> config['output_channel_parameters'] = output_channel_params
>>> alpha_hat = run(y, A, A_asq)
>>> alpha_hat[:k + 2]
array([[ 1.93356483],
       [ 0.65232347],
       [-3.39440429],
       [-1.35437724],
       [ 1.10312573],
       [-0.0050555 ],
       [ 0.78743162],
       [ 0.03616397],
       [-1.08589927],
       [-0.00136802],
       [ 0.00024121],
       [-0.00004498]])
>>> np.sum(np.abs(alpha - alpha_hat) > sigma * 3)
0

or recover the same vector learning the AWGN noise level using expectation maximization (EM)

>>> output_channel_params['noise_level_estimation'] = 'em'
>>> config['output_channel_parameters'] = output_channel_params
>>> alpha_hat = run(y, A, A_asq)
>>> alpha_hat[:k + 2]
array([[ 1.94118089],
       [ 0.71553983],
       [-3.40076165],
       [-1.35662005],
       [ 1.1099417 ],
       [-0.00688125],
       [ 0.79442879],
       [ 0.06258856],
       [-1.08792606],
       [-0.00148811],
       [ 0.00022266],
       [-0.00003785]])
>>> np.sum(np.abs(alpha - alpha_hat) > sigma * 3)
0
>>> np.set_printoptions(**np_printoptions)
magni.cs.reconstruction.gamp._algorithm._clean_vars(vars_dict)

Return a cleaned-up variables dictionary.

All private variables are removed and magni configgers are converted to dictionaries.
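
A rough standalone sketch of the behaviour described above (assumptions: private variables are identified by a leading underscore and configgers can be copied through dict-like iteration; the actual helper may differ):

def _clean_vars_sketch(vars_dict):
    # Sketch only: drop entries whose name starts with an underscore and
    # copy dict-like objects (e.g. configgers) into plain dictionaries.
    cleaned = {}
    for name, value in vars_dict.items():
        if name.startswith('_'):
            continue  # assumed convention for private variables
        cleaned[name] = dict(value) if hasattr(value, 'keys') else value
    return cleaned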

Parameters:
  • vars_dict (dict) – The dictionary of variables to clean.
Returns:
  • cleaned_vars (dict) – The cleaned dictionary of variables.
magni.cs.reconstruction.gamp._algorithm._get_gamp_initialisation(y, A, A_asq)

Return an initialisation of the GAMP algorithm.

Parameters:
  • y (ndarray) – The m x 1 measurement vector.
  • A (ndarray or magni.utils.matrices.{Matrix, MatrixCollection}) – The m x n matrix which is the product of the measurement matrix and the dictionary matrix.
  • A_asq (ndarray or magni.utils.matrices.{Matrix, MatrixCollection} or None) – The m x n matrix which is the entrywise absolute value squared product of the measurement matrix and the dictionary matrix (the default is None, which implies that a sum approximation is used).
Returns:

  • init (dict) – The initialisation of the GAMP algorithm.