1 Introduction
A collection of recent efforts surveyed by [24] considers the problem of using training data to solve inverse problems in imaging. Specifically, imagine we observe a corrupted set of measurements $y$ of an image $x^\star$ according to a measurement operator $A$ with some noise $\varepsilon$ according to

(1) $y = A x^\star + \varepsilon.$
Our task is to compute an estimate $\widehat{x}$ of $x^\star$ given measurements $y$ and knowledge of $A$. This task is particularly challenging when the inverse problem is ill-posed, i.e., when the system is underdetermined or ill-conditioned, in which case simple methods such as least squares estimation (i.e., $\widehat{x} = \arg\min_x \|y - Ax\|_2^2$) may not exist or may produce estimates that are highly sensitive to noise. Decades of research have explored geometric models of image structure that can be used to regularize solutions to this inverse problem, including [32, 29, 10] and many others. More recent efforts have focused instead on using large collections of training images, $\{x_i\}_{i=1}^N$, to learn effective regularizers.
One particularly popular and effective approach involves augmenting standard iterative inverse problem solvers with learned deep networks. This approach, which we refer to as deep unrolling (DU), is reviewed in Section 2.1. The basic idea is to build an architecture that mimics a small number of steps of an iterative procedure. In practice, the number of steps is quite small (typically 5 to 10) because of stability, memory, and numerical issues arising in backpropagation. This paper sidesteps this key limitation of deep unrolling methods with a novel approach based on
deep equilibrium models (DEMs) [4], which are designed for training arbitrarily deep networks. The result is a novel approach to training networks to solve inverse problems in imaging that yields up to a 4 dB improvement in performance over state-of-the-art alternatives and in which the computational budget can be selected at test time to optimize context-dependent tradeoffs between accuracy and computation. The key empirical findings, which are detailed in Section 6.4, are illustrated in Fig. 1(a).
1.1 Contributions
This paper presents a fundamentally new approach to machine-learning-based methods for solving linear inverse problems in imaging. Unlike most state-of-the-art methods, which are based on unrolling a small number of iterations of an iterative reconstruction scheme (“deep unrolling”), our method is based on deep equilibrium models that correspond to a potentially infinite number of iterations. This framework yields more accurate reconstructions than the current state of the art across a range of inverse problems and gives users the ability to navigate a tradeoff between reconstruction computation time and accuracy at test time; specifically, we observe up to a 4 dB improvement in PSNR. Furthermore, because our formulation is based on finding fixed points, we can use standard accelerated fixed-point methods to speed test-time computations – something that is not possible with deep unrolling methods. In addition, our approach inherits provable convergence guarantees depending on the “base” algorithm used to select a fixed-point equation for the deep equilibrium framework. Experimental results also show that our proposed initialization based on pretraining is superior to random initialization, and that the proposed approach is more robust to noise than past methods. Overall, the proposed approach is a unique bridge between conventional fixed-point methods in numerical analysis and deep-learning- and optimization-based solvers for inverse problems.
2 Relationship to Prior Work
2.1 Review of Deep Unrolling Methods
Deep unrolling methods describe approaches to solving inverse problems that consist of a fixed number of architecturally identical “blocks,” which are often inspired by a particular optimization algorithm. These methods represent the current state of the art in MRI reconstruction, with most top submissions to the fastMRI challenge [23] being some sort of unrolled network. Unrolled networks have seen success in other imaging tasks as well, e.g., low-dose CT [36], light-field photography [9], and emission tomography [21].
We describe here a specific variant of deep unrolling methods based on gradient descent, although many variants exist based on alternative optimization or fixed-point iteration schemes [24]. Suppose we have a known regularization function $r$ that could be applied to an image $x$; e.g., in Tikhonov regularization, $r(x) = \lambda \|x\|_2^2$ for some scalar $\lambda > 0$. Then we could compute an image estimate $\widehat{x}$ by solving the optimization problem

(2) $\widehat{x} = \arg\min_x \; \tfrac{1}{2}\|y - Ax\|_2^2 + r(x).$

If $r$ is differentiable, this can be accomplished via gradient descent. That is, we start with an initial estimate such as $x^0 = A^\top y$ and choose a step size $\eta > 0$, such that for iteration $k$, we set

$x^{k+1} = x^k - \eta\left[A^\top(Ax^k - y) + \nabla r(x^k)\right],$

where $\nabla r$ is the gradient of the regularizer.
The basic idea behind deep unrolled methods is to fix some number of iterations $K$ (typically $K$ ranges from 5 to 10), declare that $x^K$ will be our estimate $\widehat{x}$, and model $\nabla r$ with a neural network, denoted $R_\theta$, that can be learned with training data. We assume that all $K$ applications of $R_\theta$ have identical weights, although other works also explore non-weight-tied variants [1]. For example, we may define the unrolled gradient descent estimate to be $\widehat{x} = x^K$, where $x^0 = A^\top y$ and for $k = 0, \ldots, K-1$

(3) $x^{k+1} = x^k - \eta\left[A^\top(Ax^k - y) + R_\theta(x^k)\right].$

Training attempts to minimize the cost function $\sum_{i=1}^N \|\widehat{x}(y_i) - x_i\|_2^2$ with respect to the network parameters $\theta$. This form of training is often called “end-to-end”; that is, we do not train the network representing $\nabla r$ in isolation, but rather on the quality of the resulting estimate $\widehat{x}$, which depends on the forward model $A$.
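To make the unrolled scheme concrete, the following sketch implements the iterations in (3) on a toy least-squares problem using numpy. The linear map standing in for the learned regularizer gradient $R_\theta$ is purely illustrative (in practice it would be a trained network), and all dimensions and parameter values here are hypothetical.

```python
import numpy as np

def unrolled_gradient_descent(y, A, R_theta, eta, K):
    """K weight-tied unrolled iterations of
    x <- x - eta * (A^T (A x - y) + R_theta(x)),  cf. eq. (3)."""
    x = A.T @ y                      # common initialization x^0 = A^T y
    for _ in range(K):
        x = x - eta * (A.T @ (A @ x - y) + R_theta(x))
    return x

# Toy problem; R_theta is a hypothetical linear stand-in for the learned
# regularizer gradient (here the exact gradient of r(x) = 0.05 ||x||^2).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)) / np.sqrt(30)
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(30)
R_theta = lambda x: 0.1 * x

x_hat = unrolled_gradient_descent(y, A, R_theta, eta=0.1, K=10)
```

With a step size below $2/\lambda_{\max}(A^\top A + 0.1 I)$, each unrolled block in this toy setting decreases the regularized least-squares objective, so even a small $K$ improves on the initialization $A^\top y$.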
The number of iterations $K$ is kept small for two reasons. First, at deployment, these systems are optimized to compute image estimates quickly – a desirable property we wish to retain in developing new methods. Second, it is challenging to train deep unrolled networks for many iterations due to memory limitations of GPUs, because the memory required to calculate the backpropagation updates scales linearly with the number of unrolled iterations.
As a workaround, consider training such a system for a small number of iterations, then extracting the learned regularizer gradient $R_\theta$ and using it within a gradient descent algorithm until convergence (i.e., for more iterations than were used in training). Our numerical results highlight how poorly this method performs in practice (Section 6.4). Choosing the number of iterations (and hence the test-time computational budget) at training time is essential. As we illustrate in Fig. 1(b), one cannot deviate from this choice after training and expect good performance.
2.2 Review of Deep Equilibrium Models
In [4], the authors propose a method for training arbitrarily deep networks defined by repeated application of a single layer. Imagine an $M$-layer network with input $y$ and weights $\theta^{[0]}, \ldots, \theta^{[M-1]}$. Letting $z^{[k]}$ denote the output of the $k$th hidden layer, we may write

$z^{[k+1]} = f_{\theta^{[k]}}(z^{[k]}; y),$

where $k$ is the layer index and $f_{\theta^{[k]}}$ is a nonlinear transformation such as inner products followed by the application of a nonlinear activation function. Recent prior work explored forcing this transformation at each layer to be the same (i.e., weight tying), so that $\theta^{[k]} = \theta$ for all $k$, and showed that such networks still yield competitive performance [11, 3]. Under weight tying, we have the recursion

(4) $z^{[k+1]} = f_\theta(z^{[k]}; y),$

and the output as $k \to \infty$ is a fixed point of the operator $f_\theta(\cdot\,; y)$. [4] show this fixed point can be computed without explicitly building an infinitely-deep network, and that the network weights $\theta$ can be learned using implicit differentiation and constant memory, bypassing computation and numerical stability issues associated with related techniques on large-scale problems [8, 15]. This past work focused on sequence models and time-series tasks, assuming that $f_\theta$ was a single layer of a neural network, and did not explore the image reconstruction task that is the focus of this paper.
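As a minimal numerical illustration of the weight-tied recursion (4), the sketch below iterates a single tanh layer with input injection until it reaches a numerical fixed point. The layer and its weights are hypothetical stand-ins; rescaling so that $\|W\| < 1$ makes the map a contraction, so the "infinitely deep" network output exists and is reached by simple iteration.

```python
import numpy as np

def deq_layer(z, y, W, U, b):
    # One weight-tied layer: f_theta(z; y) = tanh(W z + U y + b)
    return np.tanh(W @ z + U @ y + b)

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)   # rescale so ||W|| = 0.5 < 1 (contraction)
U = rng.standard_normal((n, n))
b = rng.standard_normal(n)
y = rng.standard_normal(n)

z = np.zeros(n)
for _ in range(100):              # repeated application of the same layer
    z = deq_layer(z, y, W, U, b)

# z is numerically a fixed point: z = f_theta(z; y)
residual = np.linalg.norm(deq_layer(z, y, W, U, b) - z)
```

Because tanh is 1-Lipschitz, the iteration contracts at rate $\|W\|$, so a modest number of iterations drives the fixed-point residual to machine precision.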
2.3 Relationship to PlugandPlay and Regularization by Denoising Methods
Initiated by [34], a collection of methods based on the plug-and-play (PnP) framework have been proposed, allowing denoising algorithms to be used as priors for model-based image reconstruction. Given an inverse problem setting, one writes the reconstructed image as the solution to an optimization problem in which the objective function is the sum of a data-fit term and a regularization term. Applying the alternating direction method of multipliers (ADMM, [6, 7]) to this optimization problem, we arrive at a collection of update equations, one of which has the form

$x^{k+1} = \arg\min_x \; r(x) + \tfrac{\rho}{2}\|x - v^k\|_2^2,$

where $r$ is the regularization function; the optimization problem in this update equation can be considered as “denoising” the image $v^k$. PnP methods replace this explicit optimization step with a “plugged-in” denoising method. Notably, some state-of-the-art denoisers (e.g., BM3D [10] and U-Nets [28]) do not have an explicit regularizer $r$ associated with them, but nevertheless empirically work well within the PnP framework. [27] propose Regularization by Denoising (RED) based on a similar philosophy, but consider a regularizer of the form

$r(x) = \tfrac{\lambda}{2}\, x^\top\left(x - f(x)\right),$

where $f$ corresponds to an image denoising function.
Recent PnP and RED efforts focus on using training data to learn denoisers [22, 30, 38, 33, 17]. In contrast to the unrolling methods described in Section 2.1, these methods are not trained end-to-end; rather, the denoising module is trained independently of the inverse problem (i.e., of $A$) at hand. As described by [24], decoupling the training of the learned component from $A$ results in a reconstruction system that is flexible and does not need to be retrained for each new $A$, but can require substantially more training samples to achieve the reconstruction accuracy of a method trained end-to-end for a specific $A$.
3 Proposed Approach
Our approach is to choose a function $f_\theta$ so that a fixed point of the recursion in (4) is a good estimate of $x^\star$ for a given observation $y$. To the best of our knowledge, this is the first example of the application of DEMs to image reconstruction tasks. We describe choices of $f_\theta$ (and hence of the implicit infinite-depth neural network architecture) that explicitly account for the forward model $A$ and more generally for the inverse problem at hand. Specifically, we propose choosing $f_\theta$ based on the ideas of deep unrolling (Section 2.1) extended to an infinite number of iterations – a paradigm that has been beyond the reach of all previous deep unrolling methods.
Let $x^k$ denote the image estimate after $k$ rounds of an iterative algorithm (as in Section 2.1) and $y$ denote the observation from (1) under forward operator $A$. We consider three specific choices of $f_\theta$ below, but note that many other options are possible.
Deep equilibrium gradient descent (DE-Grad): Mirroring the unrolled gradient descent update in (3), we choose

(5) $f_\theta(x^k; y) = x^k - \eta\left[A^\top(Ax^k - y) + R_\theta(x^k)\right].$
Deep equilibrium proximal gradient (DE-Prox):
Proximal gradient methods [25] use a proximal operator associated with a function $r$ and scaling $\eta > 0$:

(6) $\mathrm{prox}_{\eta r}(z) := \arg\min_x \; \tfrac{1}{2}\|x - z\|_2^2 + \eta r(x),$

and use this to solve the optimization problem in (2) via the iterates

$x^{k+1} = \mathrm{prox}_{\eta r}\!\left(x^k - \eta A^\top(Ax^k - y)\right),$

where $\eta$ is a step size. Inspired by this optimization framework, we choose $f_\theta$ in the DEM framework as

(7) $f_\theta(x^k; y) = \mathrm{prox}_{\eta r}\!\left(x^k - \eta A^\top(Ax^k - y)\right).$

Following [19], we replace $\mathrm{prox}_{\eta r}$ with a trainable network $R_\theta$, leading to the fixed-point iterations:

(8) $x^{k+1} = R_\theta\!\left(x^k - \eta A^\top(Ax^k - y)\right).$
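For intuition, a classical instance of this iteration uses the soft-thresholding operator, the proximal operator of $\eta\lambda\|x\|_1$, in place of the trained network $R_\theta$. The sketch below runs that fixed-point iteration on a small compressed sensing problem; the dimensions, sparsity pattern, and parameters are arbitrary toy choices.

```python
import numpy as np

def soft_threshold(z, t):
    """prox of t * ||.||_1: the classical sparsity 'denoiser'."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def de_prox_step(x, y, A, denoiser, eta):
    """One update of the form in (8): x <- R(x - eta * A^T (A x - y)),
    with `denoiser` playing the role of the trained network R_theta."""
    return denoiser(x - eta * A.T @ (A @ x - y))

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 80)) / np.sqrt(40)   # underdetermined system
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]            # sparse ground truth
y = A @ x_true

eta, lam = 0.1, 0.02
denoiser = lambda z: soft_threshold(z, eta * lam)  # prox of eta*lam*||.||_1
x = np.zeros(80)
for _ in range(500):
    x = de_prox_step(x, y, A, denoiser, eta)
```

With this hand-designed prox the iteration is exactly ISTA for the lasso; DE-Prox replaces the thresholding step with a learned network trained end-to-end.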
Deep equilibrium alternating direction method of multipliers (DE-ADMM):
The alternating direction method of multipliers (ADMM, [6]) reformulates the optimization problem (2) as

$\min_{x, z} \; \tfrac{1}{2}\|y - Az\|_2^2 + r(x) \quad \text{subject to} \quad x = z.$

The augmented Lagrangian (in its “scaled form”; see [6]) associated with this problem is given by

$L_\rho(x, z, u) = \tfrac{1}{2}\|y - Az\|_2^2 + r(x) + \tfrac{\rho}{2}\|x - z + u\|_2^2 - \tfrac{\rho}{2}\|u\|_2^2,$

where $u$ is an additional auxiliary variable and $\rho > 0$ is a user-defined parameter. The ADMM iterates are then

(9)
$x^{k+1} = \arg\min_x \; r(x) + \tfrac{\rho}{2}\|x - z^k + u^k\|_2^2,$
$z^{k+1} = \arg\min_z \; \tfrac{1}{2}\|y - Az\|_2^2 + \tfrac{\rho}{2}\|x^{k+1} - z + u^k\|_2^2,$
$u^{k+1} = u^k + x^{k+1} - z^{k+1}.$

Here the $x$- and $z$-updates simplify as

$x^{k+1} = \mathrm{prox}_{r/\rho}(z^k - u^k), \qquad z^{k+1} = \left(A^\top A + \rho I\right)^{-1}\left(A^\top y + \rho\left(x^{k+1} + u^k\right)\right).$

As in the DE-Prox approach, $\mathrm{prox}_{r/\rho}$ can be replaced with a learned network, denoted $R_\theta$. Making this replacement, and substituting $x^{k+1} = R_\theta(z^k - u^k)$ directly into the expressions for $z^{k+1}$ and $u^{k+1}$, gives:

(10)
$z^{k+1} = \left(A^\top A + \rho I\right)^{-1}\left(A^\top y + \rho\left(R_\theta(z^k - u^k) + u^k\right)\right),$
$u^{k+1} = u^k + R_\theta(z^k - u^k) - z^{k+1}.$

Note that the updates for $z^{k+1}$ and $u^{k+1}$ depend only on the previous iterates $z^k$ and $u^k$. Therefore, the above updates can be interpreted as fixed-point iterations on the joint variable $(z, u)$, where the iteration map $f_\theta$ is implicitly defined as the map that satisfies

(11) $(z^{k+1}, u^{k+1}) = f_\theta(z^k, u^k; y).$

Here we take the estimated image to be $\widehat{x} = R_\theta(z^\star - u^\star)$, where $(z^\star, u^\star)$ is the fixed point of $f_\theta$, i.e., the limit of $(z^k, u^k)$ as $k \to \infty$.
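The joint update above can be sketched directly; the code below builds the $(z, u)$ iteration map for a toy quadratic problem. Here the "learned" component is a linear stand-in (the exact prox of a small ridge penalty), so the fixed point can be checked against a closed-form solution; all names and constants are illustrative.

```python
import numpy as np

def make_de_admm_step(A, y, R_theta, rho):
    """DE-ADMM iteration map on the joint variable (z, u):
    x  = R_theta(z - u)
    z+ = (A^T A + rho I)^{-1} (A^T y + rho (x + u))
    u+ = u + x - z+."""
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)        # formed once, reused each iteration
    Aty = A.T @ y
    def step(z, u):
        x = R_theta(z - u)
        z_next = np.linalg.solve(M, Aty + rho * (x + u))
        u_next = u + x - z_next
        return z_next, u_next
    return step

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 15)) / 5.0
y = A @ rng.standard_normal(15)
# Stand-in "denoiser": the exact prox of r(x) = 0.05 ||x||^2 at rho = 1.
R_theta = lambda v: v / 1.1

step = make_de_admm_step(A, y, R_theta, rho=1.0)
z = u = np.zeros(15)
for _ in range(200):
    z, u = step(z, u)
x_hat = R_theta(z - u)
```

Because the system matrix $A^\top A + \rho I$ is fixed across iterations, it can be factored (or inverted) once, which is what makes the $z$-update cheap in practice.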
4 Calculating forward passes and gradient updates
Given a choice of $f_\theta$, we confront the following obstacles. (1) Forward calculation: given an observation $y$ and weights $\theta$, we need to compute the fixed point of $f_\theta(\cdot\,; y)$ efficiently. (2) Training: given a collection of training samples $\{(x_i, y_i)\}_{i=1}^N$, we need to find the optimal weights $\theta$.
4.1 Calculating FixedPoints
Both training and inference in a DEM require calculating a fixed point of the iteration map $f_\theta(\cdot\,; y)$ given some initial point $z^0$. The most straightforward approach is to use the fixed-point iterations given in (4). Convergence of this scheme for specific choices of $f_\theta$ is discussed in Section 5.
However, fixed-point iterations may not converge quickly. By viewing unrolled deep networks as fixed-point iterations, we inherit the ability to accelerate inference with standard fixed-point accelerators. To our knowledge, this work is the first time iterative inversion methods incorporating deep networks have been accelerated using fixed-point accelerators.
Anderson Acceleration:
Anderson acceleration [35] (its application to deep equilibrium models was introduced in a NeurIPS tutorial by [16]) utilizes the past $m$ iterates to identify promising directions to move during the iterations. This takes the form of identifying a vector $\alpha^k \in \mathbb{R}^m$ and setting

$z^{k+1} = (1 - \beta)\sum_{j=0}^{m-1} \alpha_j^k z^{k-j} + \beta \sum_{j=0}^{m-1} \alpha_j^k f_\theta(z^{k-j}; y),$

where $\beta > 0$ is a mixing parameter. We find $\alpha^k$ by solving the optimization problem:

(12) $\min_{\alpha \in \mathbb{R}^m} \; \|G\alpha\|_2^2 \quad \text{subject to} \quad \mathbf{1}^\top \alpha = 1,$

with $G$ a matrix with $m$ columns, where the $j$th column is the (vectorized) residual $f_\theta(z^{k-j}; y) - z^{k-j}$. The optimization problem in (12) admits a least-squares solution, adding negligible computational overhead when $m$ is small (e.g., $m = 5$).
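A compact sketch of this scheme is below: the constrained least-squares problem in (12) is solved through its KKT linear system (with a tiny regularizer for numerical stability), and the demo finds the fixed point of a linear contraction. The function name, memory size, and demo problem are all illustrative choices, not the paper's implementation.

```python
import numpy as np

def anderson_solve(f, z0, m=5, K=100, lam=1e-10, beta=1.0):
    """Find a fixed point of f by Anderson acceleration: keep the last m
    iterates, solve min ||G a||^2 s.t. sum(a) = 1 (eq. (12)) via its KKT
    system, then mix the weighted combinations of iterates and updates."""
    n = z0.size
    Z = np.zeros((m, n))
    F = np.zeros((m, n))
    Z[0], F[0] = z0, f(z0)
    Z[1], F[1] = F[0], f(F[0])
    for k in range(2, K):
        mk = min(k, m)
        G = (F[:mk] - Z[:mk]).T                  # columns: f(z_j) - z_j
        H = np.zeros((mk + 1, mk + 1))           # KKT system of (12)
        H[:mk, :mk] = G.T @ G + lam * np.eye(mk)
        H[:mk, mk] = 1.0
        H[mk, :mk] = 1.0
        rhs = np.zeros(mk + 1)
        rhs[mk] = 1.0
        alpha = np.linalg.solve(H, rhs)[:mk]
        z_new = (1 - beta) * (alpha @ Z[:mk]) + beta * (alpha @ F[:mk])
        Z[k % m], F[k % m] = z_new, f(z_new)     # overwrite oldest slot
    return z_new

# Demo on a linear contraction f(z) = Bz + c with ||B|| < 1.
rng = np.random.default_rng(4)
B = rng.standard_normal((20, 20))
B *= 0.8 / np.linalg.norm(B, 2)
c = rng.standard_normal(20)
f = lambda z: B @ z + c
z_star = anderson_solve(f, np.zeros(20))
```

Each step only requires solving an $(m{+}1)\times(m{+}1)$ system, which is the "negligible computational overhead" noted above.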
An important practical consideration is that accelerating fixed-point iterations arising from optimization algorithms with auxiliary variables (like ADMM) is nontrivial. In these cases, standard fixed-point iterations may be preferred for their simplicity of implementation. This is the approach we take in finding fixed points of our proposed DE-ADMM model.
4.2 Gradient Calculation
In this section, we provide a brief overview of the training procedure used to train all networks in Section 6.4. We use stochastic gradient descent to find network parameters $\theta$ that (locally) minimize a cost function of the form $\sum_{i=1}^N \mathcal{L}(\widehat{x}(y_i), x_i)$, where $\mathcal{L}$ is a given loss function, $x_i$ is the $i$th training image with paired measurements $y_i$, and $\widehat{x}(y_i)$ denotes the reconstructed image given as the fixed point of $f_\theta(\cdot\,; y_i)$. For our image reconstruction experiments, we use the mean-squared error (MSE) loss:

(13) $\mathcal{L}(\widehat{x}(y_i), x_i) = \|\widehat{x}(y_i) - x_i\|_2^2.$
To simplify the calculations below, we consider gradients of the cost function with respect to a single training measurement/image pair, which we denote $(y, x)$. Following [4], we leverage the fact that $\widehat{x}$ is a fixed point of $f_\theta(\cdot\,; y)$ to find the gradient of the loss with respect to the network parameters without backpropagating through an arbitrarily large number of fixed-point iterations. We summarize this approach below. By the chain rule,

(14) $\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}}{\partial \widehat{x}} \frac{\partial \widehat{x}}{\partial \theta}.$

Since we assume $\mathcal{L}$ is the MSE loss, the gradient $\frac{\partial \mathcal{L}}{\partial \widehat{x}}$ is simply (twice) the residual between the equilibrium point and the ground truth: $\frac{\partial \mathcal{L}}{\partial \widehat{x}} = 2(\widehat{x} - x)^\top$.
In order to compute $\frac{\partial \widehat{x}}{\partial \theta}$ we start with the fixed-point equation $\widehat{x} = f_\theta(\widehat{x}; y)$. Implicitly differentiating this equation with respect to $\theta$ and rearranging gives

(15) $\frac{\partial \widehat{x}}{\partial \theta} = \left(I - \frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}}\right)^{-1} \frac{\partial f_\theta(\widehat{x}; y)}{\partial \theta}.$

Plugging this expression into (14) gives

$\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}}{\partial \widehat{x}} \left(I - \frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}}\right)^{-1} \frac{\partial f_\theta(\widehat{x}; y)}{\partial \theta}.$
This converts the memory-intensive task of backpropagating through many iterations of $f_\theta$ to the problem of calculating an inverse Jacobian-vector product. To approximate the inverse Jacobian-vector product, first we define the vector $v$ by

$v^\top = \frac{\partial \mathcal{L}}{\partial \widehat{x}} \left(I - \frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}}\right)^{-1}.$

Following [16], we note that $v$ is a fixed point of the equation

(16) $v^\top = v^\top \frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}} + \frac{\partial \mathcal{L}}{\partial \widehat{x}},$

and the same machinery used to calculate the fixed point $\widehat{x}$ may be used to calculate $v$. For analysis purposes, we note that if $\left\|\frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}}\right\| < 1$, the simple fixed-point iterations on (16) may be represented by the Neumann series:

(17) $v^\top = \frac{\partial \mathcal{L}}{\partial \widehat{x}} \sum_{j=0}^{\infty} \left(\frac{\partial f_\theta(\widehat{x}; y)}{\partial \widehat{x}}\right)^{j}.$

Convergence of the above Neumann series is discussed in Section 5.
Conventional autodifferentiation tools permit quickly computing the vector-Jacobian products in (16) and (17). Once an accurate approximation to $v$ is calculated, the gradient in (14) is given by

(18) $\frac{\partial \mathcal{L}}{\partial \theta} = v^\top \frac{\partial f_\theta(\widehat{x}; y)}{\partial \theta}.$
The gradient calculation process is summarized in the following steps, assuming a fixed point $\widehat{x}$ of $f_\theta(\cdot\,; y)$ is known:

1. Compute the residual $\frac{\partial \mathcal{L}}{\partial \widehat{x}} = 2(\widehat{x} - x)^\top$.

2. Compute an approximate fixed point $v$ of the equation (16).

3. Compute the gradient $\frac{\partial \mathcal{L}}{\partial \theta} = v^\top \frac{\partial f_\theta(\widehat{x}; y)}{\partial \theta}$.
5 Convergence Theory
Here we study convergence of the proposed deep equilibrium models to a fixed point at inference time; i.e., given the iteration map $f_\theta$, we give conditions that guarantee the convergence of the iterates $z^{k+1} = f_\theta(z^k; y)$ to a fixed point as $k \to \infty$.
Classical fixed-point theory ensures that the iterates converge to a unique fixed point if the iteration map is contractive, i.e., if there exists a constant $c < 1$ such that $\|f_\theta(z_1; y) - f_\theta(z_2; y)\| \le c\|z_1 - z_2\|$ for all $z_1, z_2$. Below we give conditions on the learned component $R_\theta$ (replacing the gradient or proximal mapping of a regularizer) used in the DE-Grad, DE-Prox, and DE-ADMM models that ensure the resulting iteration map is contractive, and thus that the fixed-point iterations for these models converge.
In particular, following [30], we assume that the learned component $R_\theta$ satisfies the following condition: there exists an $\varepsilon > 0$ such that for all $x_1, x_2$ we have

(19) $\|R(x_1) - R(x_2)\| \le \varepsilon \|x_1 - x_2\|,$

where $R := R_\theta - I$. In other words, we assume the map $R_\theta - I$ is $\varepsilon$-Lipschitz.
If we interpret $R_\theta$ as a denoising or de-artifacting network, then $R = R_\theta - I$ is the map that outputs (up to sign) the noise or artifacts present in a degraded image. In practice, $R_\theta$ is often implemented with a residual “skip connection,” such that $R_\theta = I - N_\theta$, where $N_\theta$ is, e.g., a deep U-Net. Therefore, in this case, (19) is equivalent to assuming the trained network $N_\theta$ is $\varepsilon$-Lipschitz.
We prove the following theorem in the supplement.
Theorem 1 (Convergence of DE-Grad).
Assume that $R := R_\theta - I$ is $\varepsilon$-Lipschitz as in (19), and let $\mu = \lambda_{\max}(A^\top A)$ and $\sigma = \lambda_{\min}(A^\top A)$, where $\lambda_{\max}$ and $\lambda_{\min}$ denote the maximum and minimum eigenvalue, respectively. If the step-size parameter $\eta$ is such that $0 < \eta \le 1/(\mu + 1)$, then the DE-Grad iteration map $f_\theta$ defined in (5) satisfies

$\|f_\theta(x_1; y) - f_\theta(x_2; y)\| \le \left(1 - \eta(\sigma + 1 - \varepsilon)\right)\|x_1 - x_2\|$

for all $x_1, x_2$. The coefficient $1 - \eta(\sigma + 1 - \varepsilon)$ is less than $1$ if $\varepsilon < \sigma + 1$, in which case the iterates of DE-Grad converge.
Proof:
Let $f(x) = x - \eta\left[A^\top(Ax - y) + R_\theta(x)\right]$ be the iteration map for DE-Grad. Writing $R_\theta = I + R$, the Jacobian of $f$ with respect to $x$ is given by

$J_f(x) = (1 - \eta)I - \eta A^\top A - \eta J_R(x),$

where $J_R(x)$ is the Jacobian of $R = R_\theta - I$ with respect to $x$. To prove $f$ is contractive it suffices to show $\|J_f(x)\| < 1$ for all $x$, where $\|\cdot\|$ denotes the spectral norm. Towards this end, we have

(20) $\|J_f(x)\| \le \left\|(1 - \eta)I - \eta A^\top A\right\| + \eta\|J_R(x)\| \le \max_i \left|1 - \eta(\lambda_i + 1)\right| + \eta\varepsilon,$

where $\lambda_i$ denotes the $i$th eigenvalue of $A^\top A$, and in the final inequality of (20) we used our assumption that the map $R = R_\theta - I$ is $\varepsilon$-Lipschitz, and therefore the spectral norm of its Jacobian is bounded by $\varepsilon$.
Finally, by our assumption $0 < \eta \le 1/(\mu + 1)$, where $\mu = \lambda_{\max}(A^\top A)$, we have $\eta(\lambda_i + 1) \le 1$ for all $i$, which implies $|1 - \eta(\lambda_i + 1)| = 1 - \eta(\lambda_i + 1)$ for all $i$. Therefore, the maximum in (20) is obtained at $\lambda_i = \sigma$, which gives

$\|J_f(x)\| \le 1 - \eta(\sigma + 1 - \varepsilon).$

This shows $f$ is Lipschitz with constant $c = 1 - \eta(\sigma + 1 - \varepsilon)$, proving the claim.
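The bound in the proof can be checked numerically for a linear stand-in learned component, for which the Lipschitz constant of $R_\theta - I$ is exact; the dimensions and constants below are arbitrary illustrations, not the paper's trained models.

```python
import numpy as np

# Check the contraction bound with a linear stand-in learned component:
# R_theta(x) = x + eps * S @ x, so R = R_theta - I = eps*S is eps-Lipschitz.
rng = np.random.default_rng(6)
n = 12
A = rng.standard_normal((20, n)) / 4.0
S = rng.standard_normal((n, n))
S /= np.linalg.norm(S, 2)                 # ||S|| = 1
eps = 0.3

AtA = A.T @ A
lams = np.linalg.eigvalsh(AtA)
sigma, mu = lams.min(), lams.max()
eta = 1.0 / (mu + 1.0)                    # step size allowed by Theorem 1

# Jacobian of f(x) = x - eta*(A^T(Ax - y) + R_theta(x)) is constant here:
Jf = (1 - eta) * np.eye(n) - eta * AtA - eta * eps * S
lip = np.linalg.norm(Jf, 2)               # exact Lipschitz constant of f
bound = 1 - eta * (sigma + 1 - eps)       # coefficient from Theorem 1
```

By the triangle inequality used in (20), the measured spectral norm can never exceed the theorem's coefficient, and the coefficient is below one whenever $\varepsilon < \sigma + 1$.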
Convergence of the PnP approaches PnP-Prox and PnP-ADMM was studied in [30]. At inference time, the proposed DE-Prox and DE-ADMM methods are equivalent to the corresponding PnP method but with a retrained denoising network $R_\theta$. Therefore, the convergence results in [30] apply directly to DE-Prox and DE-ADMM. To keep the paper self-contained, we restate these results below, specialized to the case of the quadratic data-fidelity term assumed in (2).
Theorem 2 (Convergence of DE-Prox).
See Theorem 1 of [30].
Theorem 3 (Convergence of DE-ADMM).
See Corollary 1 of [30].
Unlike the convergence result for DE-Grad in Theorem 1, the convergence results for DE-Prox and DE-ADMM in Theorem 2 and Theorem 3 make the assumption that $A^\top A \succ 0$, i.e., that $A$ has a trivial nullspace. This condition is satisfied for certain inverse problems, such as denoising or deblurring, but violated in many others, including compressed sensing and undersampled MRI. However, in practice we observe that the iterates of DE-Prox and DE-ADMM still appear to converge even in situations where $A$ has a nontrivial nullspace, indicating this assumption may be stronger than necessary.
Finally, an important practical concern when training deep equilibrium models is whether the fixed-point iterates used to compute gradients (as detailed in Section 4.2) will converge. Specifically, the gradient of the loss at the training pair $(y, x)$ involves computing the truncated Neumann series in (17). This series converges if the Jacobian $\frac{\partial f_\theta(x; y)}{\partial x}$ has spectral norm less than $1$ when evaluated at any $x$, which is the case when $f_\theta$ is contractive. Therefore, the same conditions in Theorems 1–3 that ensure the iteration map is contractive also ensure that the Neumann series in (17) used to compute gradients converges.
6 Experimental Results
6.1 Comparison Methods and Inverse Problems
Our numerical experiments include comparisons with a variety of models and methods. Total-variation regularized least squares (TV) is an important baseline that does not use any training data but rather leverages geometric models of image structure [29, 31, 5]. The PnP and RED methods are described in Section 2.3; we consider both the original ADMM variant of [34] (PnP-ADMM) and a proximal gradient method (PnP-Prox) as described in [30]. We utilize the ADMM formulation of RED. Deep unrolled methods (DU) are described in Section 2.1; we consider DU using gradient descent, proximal gradient, and ADMM. The preconditioned Neumann network [14] represents the state of the art in unrolled approaches but does not have simple deep equilibrium or plug-and-play analogues.
We compare the above approaches across three inverse problems: Gaussian deblurring (Deblur), 8× compressed sensing with a Gaussian measurement matrix (CS), and 8× accelerated Cartesian single-coil MRI reconstruction (MRI). The compressed sensing and single-coil MRI measurements are corrupted with additive Gaussian noise; the two deblurring settings, Deblur (1) and Deblur (2), differ in the level of additive Gaussian noise applied to the blurred images.
For deblurring and compressed sensing, we utilize a subset of the Celebrity Faces with Attributes (CelebA) dataset [18], which consists of centered human faces. We train on a subset of 10,000 of the training images. All images are resized to 128×128. For the single-coil MRI problem, we use a random subset of size 2,000 of the fastMRI single-coil knee dataset [37] for training. We trim the fully-sampled images so they are 320×320 pixels.
6.2 Architecture Specifics
For our learned network $R_\theta$, we utilize a U-Net architecture [28] with some modifications. First, we have removed all instance normalization layers. For both the CelebA and fastMRI datasets, we train six U-Net denoisers at a range of noise variances on the training split.
For the CelebA set, to ensure contractivity of the learned component, we add spectral normalization to all layers [4], ensuring that each layer has a Lipschitz constant bounded above by 1. This normalization is enforced during pretraining as well as during the deep equilibrium training phase.
We found that adding spectral normalization resulted in significant PSNR drops on the fastMRI dataset, and so we did not use spectral normalization for the fastMRI U-Nets. Instead, we initialized the U-Nets before pretraining by sampling kernel weights from a zero-mean Gaussian distribution. Empirically, this initialization provides sufficient expressive power without causing a lack of contractivity, which can cause significant problems during training. Further details on settings, parameter choices, and data may be found in the appendix and in our publicly available code (available at https://github.com/dgilton/deep_equilibrium_inverse).
6.3 Parameter Tuning and Pretraining
The proposed approaches to solving inverse problems via deep equilibrium models are all based on some iterative optimization algorithm. Each of these iterative optimization algorithms has its own set of hyperparameters to choose, e.g., the step size $\eta$ in DE-Grad, plus any parameters used to calculate the initial estimate $x^0$. We choose to fix all algorithm-specific hyperparameters prior to training a deep equilibrium network, for clarity and ease of training. We perform a grid search over algorithm-specific hyperparameters, testing the performance of the untrained deep equilibrium network on a held-out test set.
Tuning hyperparameters requires choosing a particular $R_\theta$ during tuning. We use an $R_\theta$ that has been pretrained for Gaussian denoising. Pretraining can be done on the target dataset (e.g., training on MRI images directly) or using an independent dataset (e.g., the BSD500 image dataset [20]). We use the former approach in our experiments. In the supplemental materials we show that pretraining provides a small improvement in reconstruction accuracy over random initialization.
Because we initialize our learned components with denoisers, the initial setup of our method exactly corresponds to tuning a PnP approach with a deep denoiser. Training then adapts the iteration map to a particular inverse problem and data distribution. We note that our approach may be used to adapt any iterative optimization framework satisfying the conditions in Section 4, e.g., solvers for RED [27].
6.4 Main Results
We present the main reconstruction accuracy comparison in Table 1. Each entry for deep equilibrium (DE), Regularization by Denoising (RED), and plug-and-play (PnP) approaches is the result of running fixed-point iterations until the relative change between iterations falls below a fixed tolerance. During training, all DE models were limited to 50 forward updates. The DU models are tested at the number of iterations for which they were trained, and all parameters for TV reconstructions (including the number of TV iterations) are cross-validated to maximize PSNR. Performance as a function of iteration is shown in Figs. 1(a), 5(a), and 5(b), with example reconstructions in Fig. 6. Further example reconstructions are available for qualitative evaluation in Appendix 8. We observe that our DE reconstructions continue to improve in quality beyond the number of iterations for which they were trained.
We observe that our DE-based approaches consistently outperform DU approaches across architecture choices. Among the choices of iterative reconstruction architectures for $f_\theta$, DE-Prox is a frontrunner. However, some of the differences may be due to DE-ADMM not being accelerated, while DE-Prox and DE-Grad leverage Anderson acceleration.
Across problems, the DE approach is generally competitive with the end-to-end trained DU solvers. For two of the three problems, our approach requires no more computation to be competitive with a DU network, with increasing advantage to our approach with further computation.
6.5 Effect of PreTraining
Here we compare the effect of initializing the learned component $R_\theta$ in our deep equilibrium models with a pretrained denoiser versus initializing with random weights. We use identical hyperparameters for both initialization methods.
We present our results on deep equilibrium proximal gradient descent (DE-Prox) and deep equilibrium ADMM (DE-ADMM) in Figure 7. We observe an improvement in reconstruction quality when utilizing our pretraining method compared to a random initialization. We also note that pretraining enables a simpler choice of algorithm-specific hyperparameters. For example, with random initialization, choosing the proper internal step size for DE-Prox would require training several different DE-Prox instances and selecting the best step size based on a validation set, in addition to choosing any other parameters, such as the learning rate used during training.
6.6 Noise Sensitivity
We observe empirically that the Deep Equilibrium approach to training achieves competitive reconstruction quality and increased flexibility with respect to allocating computation budget. Recent work in deep inversion has questioned these methods’ robustness to noise and unexpected inputs [2, 26, 13].
To examine whether the deep equilibrium approach is brittle to simple changes in the noise distribution, we varied the level of Gaussian noise added to the observations at test time and observed the effect on reconstruction quality. Fig. 7 demonstrates that the deep equilibrium model DE-Prox is more robust to variation in the noise level than the analogous deep unrolled approach DU-Prox. The forward model used in Fig. 7 is MRI reconstruction.
7 Conclusions
This paper illustrates nontrivial quantitative benefits of using implicitly-defined infinite-depth networks for solving linear inverse problems in imaging. Other recent work has focused on implicit networks akin to the deep equilibrium models considered here (e.g., [12]). Whether these models could lead to additional advances in image reconstruction remains an open question for future work. Furthermore, while the exposition in this work focused on linear inverse problems, nonlinear inverse problems may also be solved with iterative approaches. The conditions under which the deep equilibrium methods proposed here may be applied to such problems are an active area of investigation.
References
 [1] (2018) Learned primaldual reconstruction. IEEE transactions on medical imaging 37 (6), pp. 1322–1332. Cited by: §2.1.
 [2] (2020) On instabilities of deep learning in image reconstruction and the potential costs of ai. Proceedings of the National Academy of Sciences. Cited by: §6.6.
 [3] (2018) Trellis networks for sequence modeling. arXiv preprint arXiv:1810.06682. Cited by: §2.2.
 [4] (2019) Deep equilibrium models. In Advances in Neural Information Processing Systems, pp. 690–701. Cited by: §1, §2.2, §4.2, §6.2.
 [5] (2009) Fast gradientbased algorithms for constrained total variation image denoising and deblurring problems. IEEE transactions on image processing 18 (11), pp. 2419–2434. Cited by: §6.1.
 [6] (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc. Cited by: §2.3, §3.
 [7] (2016) Plugandplay admm for image restoration: fixedpoint convergence and applications. IEEE Transactions on Computational Imaging 3 (1), pp. 84–98. Cited by: §2.3.
 [8] (2018) Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571–6583. Cited by: §2.2.
 [9] (2020) Momentumnet: fast and convergent iterative neural network for inverse problems. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §2.1.
 [10] (2007) Image denoising by sparse 3d transformdomain collaborative filtering. IEEE Transactions on image processing 16 (8), pp. 2080–2095. Cited by: §1, §2.3.

 [11] (2019) Recurrent stacking of layers for compact neural machine translation models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 6292–6299. Cited by: §2.2.
 [12] (2019) Implicit deep learning. arXiv preprint arXiv:1908.06315. Cited by: §7.
 [13] (2020) Solving inverse problems with deep neural networks–robustness included?. arXiv preprint arXiv:2011.04268. Cited by: §6.6.
 [14] (2019) Neumann networks for linear inverse problems in imaging. IEEE Transactions on Computational Imaging. Cited by: §6.1.
 [15] (2017) Stable architectures for deep neural networks. Inverse Problems 34 (1), pp. 014004. Cited by: §2.2.
 [16] (2020) Deep implicit layers: neural ODEs, deep equilibrium models, and beyond. Cited by: §4.1, §4.2.
 [17] (2020) Rare: image reconstruction using deep priors learned without groundtruth. IEEE Journal of Selected Topics in Signal Processing 14 (6), pp. 1088–1099. Cited by: §2.3.

 [18] (2015-12) Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV). Cited by: §6.1.
 [19] (2018) Neural proximal gradient descent for compressive imaging. In Advances in Neural Information Processing Systems, pp. 9573–9583. Cited by: §3.
 [20] (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, Vol. 2, pp. 416–423. Cited by: §6.3.
 [21] (2020) Modelbased deep learning pet image reconstruction using forwardbackward splitting expectation maximisation. IEEE Transactions on Radiation and Plasma Medical Sciences. Cited by: §2.1.
 [22] (2017) Learning proximal operators: using denoising networks for regularizing inverse imaging problems. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1781–1790. Cited by: §2.3.
 [23] (2020) Stateoftheart machine learning mri reconstruction in 2020: results of the second fastmri challenge. arXiv preprint arXiv:2012.06318. Cited by: §2.1.
 [24] (2020) Deep learning techniques for inverse problems in imaging. arXiv preprint arXiv:2005.06001. Cited by: §1, §2.1, §2.3.
 [25] (2014) Proximal algorithms. Foundations and Trends in Optimization 1 (3), pp. 127–239. Cited by: §3.
 [26] (2020) Improving robustness of deep-learning-based image reconstruction. arXiv preprint arXiv:2002.11821. Cited by: §6.6.
 [27] (2017) The little engine that could: regularization by denoising (RED). SIAM Journal on Imaging Sciences 10 (4), pp. 1804–1844. Cited by: §2.3, §6.3.
 [28] (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Cited by: §2.3, §6.2.
 [29] (1992) Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60 (1–4), pp. 259–268. Cited by: §1, §6.1.
 [30] (2019) Plug-and-play methods provably converge with properly trained denoisers. In International Conference on Machine Learning, pp. 5546–5557. Cited by: §2.3, §5, §6.1.
 [31] (2003) Edge-preserving and scale-dependent properties of total variation regularization. Inverse Problems 19 (6), pp. S165. Cited by: §6.1.
 [32] (1943) On the stability of inverse problems. In Dokl. Akad. Nauk SSSR, Vol. 39, pp. 195–198. Cited by: §1.
 [33] (2019) Super-resolution via image-adapted denoising CNNs: incorporating external and internal learning. IEEE Signal Processing Letters 26 (7), pp. 1080–1084. Cited by: §2.3.
 [34] (2013) Plug-and-play priors for model based reconstruction. In 2013 IEEE Global Conference on Signal and Information Processing, pp. 945–948. Cited by: §2.3, §6.1.
 [35] (2011) Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49 (4), pp. 1715–1735. Cited by: §4.1.
 [36] (2019) Computationally efficient deep neural network for computed tomography image reconstruction. Medical physics 46 (11), pp. 4763–4776. Cited by: §2.1.
 [37] (2018) fastMRI: an open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839. Cited by: §6.1.
 [38] (2017) Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3929–3938. Cited by: §2.3.
8 Appendix
8.1 Further Qualitative Results
In this section, we provide further visualizations of the reconstructions produced by Deep Equilibrium models and the corresponding Deep Unrolled approaches, beyond those shown in the main body. Figures 10, 9, and 8 are best viewed electronically; they contain the ground-truth images, the measurements (projected back to image space in the case of MRI and compressed sensing), and the reconstructions produced by DU-Prox and DE-Prox.
We also visualize the intermediate iterates of the fixed-point iterations to further demonstrate the convergence properties of DEMs for image reconstruction. We find that DEMs converge quickly to reasonable reconstructions, and maintain high-quality reconstructions after more than one hundred iterations.
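The convergence behavior described above can be sketched with a plain fixed-point iteration on a toy contractive map. The map below is a hypothetical stand-in for the learned DE-Prox update (it is not the paper's network), and the names `fixed_point_iterate` and `tol` are our own illustrative choices.

```python
import numpy as np

def fixed_point_iterate(f, x0, max_iter=100, tol=1e-6):
    """Run naive fixed-point iteration x_{k+1} = f(x_k).

    Returns the final iterate and the relative-residual history, so
    convergence can be inspected the same way the appendix figures do.
    """
    x = x0
    residuals = []
    for _ in range(max_iter):
        x_next = f(x)
        res = np.linalg.norm(x_next - x) / (np.linalg.norm(x) + 1e-12)
        residuals.append(res)
        x = x_next
        if res < tol:
            break
    return x, residuals

# Toy contractive map standing in for the learned iteration map:
# f(x) = 0.5 * x + b has the unique fixed point x* = 2b.
b = np.array([1.0, -2.0])
x_star, hist = fixed_point_iterate(lambda x: 0.5 * x + b, np.zeros(2))
```

Because the toy map is a contraction, the residuals shrink geometrically; the learned maps in the paper are trained so that their iterates behave similarly in practice.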
[Figure: Ground Truth | IFFT | DU-Prox | DE-Prox (Ours)]
[Figure: Ground Truth | DU-Prox | DE-Prox (Ours)]
[Figure: Ground Truth | Measurements | DU-Prox | DE-Prox (Ours)]
8.2 Visualizing Iterates
In Figures 13, 12, and 11 we visualize the outputs of the K'th iteration of the learned mapping in DE-Prox. Recall that during training, DE-Prox was limited to 50 forward iterations. We observe that across forward problems, the reconstructions continue converging beyond this number of iterations.
We illustrate 90 iterations for the deblurring and MRI reconstructions, and 190 iterations for compressed sensing, since that problem requires additional iterations to converge.
For further illustration, we also demonstrate the qualitative effects of running DU-Prox for more iterations than it was trained for.
[Figure: DE-Prox iterates, K = 0 to 90 in steps of 10.]
[Figure: DE-Prox iterates (compressed sensing), K = 0 to 190 in steps of 10.]
[Figure: DE-Prox iterates, K = 0 to 90 in steps of 10.]
[Figure: DU-Prox iterates, K = 0 to 90 in steps of 10.]
[Figure: DU-Prox iterates, K = 0 to 90 in steps of 10.]
8.3 Further Experimental Details
In this section we provide further details related to the experimental setup.
The input to every learned algorithm is the preconditioned measurement , where is generally set equal to the noise level . For the MRI reconstruction experiments, was used. The masks used in the MRI reconstruction experiments are based on a Cartesian sampling pattern, as in the standard fastMRI setting. The center 4% of frequencies are fully sampled, and the remaining frequencies are sampled according to a Gaussian distribution centered at 0 frequency with .
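A sampling pattern of this kind can be sketched as below. The function name `cartesian_mask` and all parameter values (image width, center fraction, acceleration factor, and the Gaussian width `sigma`) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def cartesian_mask(n_cols=320, center_frac=0.04, accel=4, sigma=0.3, seed=0):
    """Sketch of a Cartesian k-space sampling mask: fully sample the
    center columns, then draw the remaining lines with probability
    proportional to a zero-centered Gaussian over normalized frequency.
    """
    rng = np.random.default_rng(seed)
    n_center = int(round(center_frac * n_cols))
    mask = np.zeros(n_cols, dtype=bool)
    lo = n_cols // 2 - n_center // 2
    mask[lo:lo + n_center] = True            # fully sampled low frequencies

    freqs = np.linspace(-0.5, 0.5, n_cols)   # normalized frequency per column
    weights = np.exp(-freqs**2 / (2 * sigma**2))
    weights[mask] = 0.0                      # do not re-draw the center lines
    weights /= weights.sum()

    n_extra = max(n_cols // accel - n_center, 0)
    extra = rng.choice(n_cols, size=n_extra, replace=False, p=weights)
    mask[extra] = True
    return mask

mask = cartesian_mask()
```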
The compressed sensing design matrices have entries sampled and scaled so that each entry is drawn from a Gaussian distribution with variance , where . The same design matrix is used for all learned methods.
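Such a design matrix can be generated as follows. Since the exact variance is not reproduced in this text, the sketch assumes the common choice of variance 1/m, which gives columns unit expected squared norm; the function name `gaussian_design` is our own.

```python
import numpy as np

def gaussian_design(m, n, seed=0):
    """Compressed-sensing design matrix with i.i.d. Gaussian entries.

    Assumed scaling (for illustration): standard deviation 1/sqrt(m),
    so each column has expected squared norm equal to 1.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# One fixed matrix, shared across methods as described above.
A = gaussian_design(64, 256)
```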
Optimization algorithm parameters for RED, Plug-and-Play, and all Deep Equilibrium approaches are chosen via a logarithmic grid search from to with 20 elements in each dimension of the grid. All DU methods were trained for 10 iterations. All testing was done on an NVIDIA Titan X. All networks were trained on a cluster with a variety of computing resources (see https://slurm.ttic.edu/). Every experiment was run using a single-GPU, single-CPU setup with less than 12 GB of GPU memory.
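A two-dimensional logarithmic grid search of this shape can be sketched as below. The grid endpoints (1e-4 to 1e1) are placeholders, since the paper's exact range is not reproduced in this appendix text, and the toy score function stands in for validation reconstruction quality.

```python
import itertools
import numpy as np

# 20 logarithmically spaced candidates per dimension (endpoints assumed).
grid = np.logspace(-4, 1, num=20)

def grid_search(score_fn):
    """Return the (alpha, beta) pair maximizing score_fn over the 2-D grid."""
    return max(itertools.product(grid, grid), key=lambda a_b: score_fn(*a_b))

# Toy score peaked near alpha = beta = 1e-1, standing in for validation PSNR.
best = grid_search(lambda a, b: -(np.log10(a) + 1)**2 - (np.log10(b) + 1)**2)
```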