Image denoising: Can plain neural networks compete with BM3D?
H. C. Burger, C. J. Schuler, S. Harmeling, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 2392-2399
Introduction:
Image denoising is the task of recovering a clean image from a noisy observation. Many denoising algorithms exist; this paper shows that a plain multilayer perceptron (MLP) can achieve state-of-the-art denoising performance by combining three factors:
- The capacity is large: enough hidden layers with sufficiently many hidden units.
- The patch size is large, so that each patch contains enough information to recover a noise-free estimate.
- The training set is large.
Training MLP for denoising:
The noise is additive white Gaussian noise (AWGN). The MLP is a nonlinear function that maps a noisy patch to an estimate of the clean patch. Its parameters are estimated on pairs of noisy and clean image patches with stochastic gradient descent, minimizing the quadratic error between the denoised patch and the clean patch; the weights are updated with the backpropagation algorithm. Training is made efficient with the following tricks:
- Data normalization: pixel values are normalized to mean 0 and variance 1.
- Weight initialization: weights are sampled from a normal distribution.
- Learning rate division: the learning rate of each layer is divided by the number of input units of that layer. The base learning rate is set to 0.1, and no regularization is applied to the weights.
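The training procedure above can be sketched in NumPy. This is a minimal sketch, not the paper's implementation: the layer sizes, the tanh activation, and the 1/sqrt(fan-in) scaling of the initial weights are illustrative assumptions (the paper's best network uses 17 x 17 patches, i.e. 289 inputs, and four hidden layers of 2047 units).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper's best network is much larger.
n_in, n_hidden, n_out = 289, 64, 289

# Weight initialization: sampled from a normal distribution
# (the 1/sqrt(fan-in) scaling is an illustrative choice, not from the paper).
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)   # nonlinear hidden layer
    return W2 @ h + b2, h      # linear output layer

def sgd_step(x_noisy, x_clean, base_lr=0.1):
    """One stochastic gradient step on the quadratic patch error."""
    global W1, b1, W2, b2
    y, h = forward(x_noisy)
    err = y - x_clean                   # gradient of 0.5 * ||y - x_clean||^2
    dh = (W2.T @ err) * (1.0 - h ** 2)  # backpropagate through tanh
    # Learning rate division: each layer's rate is base_lr / (its input units).
    W2 -= (base_lr / n_hidden) * np.outer(err, h)
    b2 -= (base_lr / n_hidden) * err
    W1 -= (base_lr / n_in) * np.outer(dh, x_noisy)
    b1 -= (base_lr / n_in) * dh
    return 0.5 * np.sum(err ** 2)

# Data normalization: train on patches with mean 0 and variance 1.
clean = rng.standard_normal(n_in)
noisy = clean + rng.normal(0.0, 25.0 / 255.0, n_in)  # AWGN, sigma = 25 on [0, 255]
loss_before = sgd_step(noisy, clean)
loss_after = sgd_step(noisy, clean)  # repeated step on the same pair
```

Repeating the step on the same patch pair should reduce the quadratic error, which is a quick sanity check that the gradients are wired correctly.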
A noisy image is obtained by corrupting a clean image with additive white Gaussian noise with sigma = 25. The noisy image is decomposed into overlapping patches, each noisy patch is denoised separately, and the denoised image is obtained by placing the denoised patches back at their locations, averaging where they overlap.
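A minimal sketch of this decompose/denoise/recombine pipeline, assuming overlapping predictions are simply averaged. The patch size and stride are illustrative, and `denoise_patch` is a hypothetical stand-in for the trained MLP:

```python
import numpy as np

def denoise_image(img, denoise_patch, patch=17, stride=3):
    """Decompose a noisy image into overlapping patches, denoise each one,
    and average the overlapping denoised patches back into an image."""
    H, W = img.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            p = img[i:i + patch, j:j + patch].ravel()
            q = denoise_patch(p).reshape(patch, patch)
            out[i:i + patch, j:j + patch] += q
            weight[i:i + patch, j:j + patch] += 1
    return out / np.maximum(weight, 1)  # avoid division by zero at uncovered borders

# Usage with an identity "denoiser" standing in for the MLP:
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + rng.normal(0.0, 25.0 / 255.0, clean.shape)  # AWGN, sigma = 25
restored = denoise_image(noisy, lambda p: p)
```

This sketch leaves border pixels not covered by any patch at zero; a full implementation would pad the image or use a stride of 1 so every pixel is covered.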
Experimental setup:
Experiments are performed on grayscale images of size 512 x 512. Two training sets are used:
- Small training set with 200 images.
- Large training set with 150,000 images.
Two test sets are used:
- Eleven standard images, including Lena, Barbara, Boat, etc.
- 500 random images from Pascal VOC and McGill.
Results:
Networks were trained with different architectures and patch sizes; larger capacity and a larger training set give better performance. The best-performing network uses:
- the large training set
- a patch size of 17 x 17
- four hidden layers with 2047 units each.
The MLP outperforms BM3D on 6 of the 11 standard images. BM3D performs well on images with regular, repeating patterns. Although the MLP does not outperform BM3D on every image, it outperforms the KSVD and GSM algorithms on all of them.
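Denoising quality in comparisons like this is commonly reported as peak signal-to-noise ratio (PSNR); a minimal sketch of the metric:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(denoised, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# For AWGN with sigma = 25 on an 8-bit scale, the PSNR of the noisy image
# is about 10 * log10(255^2 / 25^2) ~ 20.2 dB before any denoising.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (512, 512)).astype(float)
noisy = clean + rng.normal(0.0, 25.0, clean.shape)
noisy_psnr = psnr(clean, noisy)
```

A denoiser improves on this baseline by reducing the mean squared error, which raises the PSNR.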
Conclusion:
Although the MLP obtains better performance at the noise level it was trained on, it does not generalize well to other noise levels. The authors believe that the MLP can perform even better with larger capacity and a larger training set.
Software