Deep Image Prior

Deep Image Prior employs a U-Net architecture to denoise and inpaint images. The U-Net's weights are optimized with gradient descent to generate the restored image x*.
The loss function for image prior generation: θ* = argmin_θ E(f_θ(z); x₀), where z is a fixed random input and x₀ is the corrupted image; the restored image is x* = f_θ*(z).
Our approach can restore an image with a complex degradation (JPEG compression in this case). As the optimization progresses, the deep image prior recovers most of the signal while getting rid of halos and blockiness (after 2,400 iterations), before eventually overfitting to the input (at 50K iterations).
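The optimization described above can be sketched in a few lines. This is a minimal illustration assuming a PyTorch setup; the function name, optimizer, and hyper-parameters are illustrative choices, not the authors' released code.

```python
import torch

def deep_image_prior(f, x0, z, num_iters=2400, lr=0.01):
    """Fit a randomly initialized network f to a single corrupted image x0,
    starting from a fixed random input z (a sketch of the DIP idea).

    The iteration count acts as the regularizer: stopping early (e.g.,
    ~2,400 iterations) keeps the recovered signal, while running much
    longer (e.g., 50K iterations) overfits the corruption.
    """
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(num_iters):
        opt.zero_grad()
        # Data term E(f_theta(z); x0); MSE is a common choice for denoising.
        loss = torch.nn.functional.mse_loss(f(z), x0)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return f(z)  # restored image x* = f_{theta*}(z)
```

In practice f would be a U-Net; a single convolution is enough to exercise the loop.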
In many cases, the deep image prior is sufficient to successfully inpaint large regions. Despite using no learning, the results may be comparable to [15], which does. The choice of hyper-parameters is important (for example, (d) demonstrates sensitivity to the learning rate), but a good setting works well for most images we tried.
Inpainting using different depths and architectures. The figure shows that much better inpainting results can be obtained by using deeper random networks. However, adding skip connections to ResNet in U-Net is highly detrimental.
Integrating over the posterior (a weighted average of intermediate results) to generate the final result x* without early stopping.
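The averaging idea can be sketched as an exponential moving average of the network's intermediate outputs. This is a simplified illustration of the posterior-averaging concept from [2], under assumed PyTorch conventions; the function name and the EMA weight `beta` are illustrative, not the follow-up paper's exact algorithm.

```python
import torch

def dip_with_averaging(f, x0, z, num_iters=2400, lr=0.01, beta=0.99):
    """Run the DIP optimization, but return a running weighted average of
    intermediate outputs instead of the final (possibly overfit) output.
    The average approximates integrating over the posterior, removing the
    need to pick an early-stopping iteration (sketch, not the exact method).
    """
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    avg = None
    for _ in range(num_iters):
        opt.zero_grad()
        out = f(z)
        loss = torch.nn.functional.mse_loss(out, x0)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Exponential moving average of iterates; early iterations
            # (before overfitting) dominate less as beta -> 1.
            avg = out.detach() if avg is None else beta * avg + (1 - beta) * out.detach()
    return avg  # final result x*, no early stopping required
```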
  • The paper is well written and provides a different perspective on deep learning methods. The idea is simple, so the paper is an easy and enjoyable read.
  • Implementations of both [1] and [2] are released on GitHub.
  • I wish the authors elaborated more on computational complexity. "Taking several minutes of GPU computation per image" is vague wording. Does it take 3 minutes or 20?
  • From my perspective, the main limitation of this paper is not the computational complexity but deciding when to terminate the optimization. I am glad the reviewers didn't reject the paper for this limitation. Fortunately, this issue is addressed in a follow-up paper [2].

I write reviews on computer vision papers. Writing tips are welcomed.

Ahmed Taha
