Image denoising and inpainting with deep neural networks
J. Xie, L. Xu, E. Chen, Advances in Neural Information Processing Systems, 2012, pp. 1-6
The paper combines sparse coding with deep networks pre-trained as auto-encoders for image denoising and inpainting. For denoising, it deals with AWGN and salt & pepper noise. Image inpainting is divided into two settings:
Non-blind inpainting: the regions to be filled are given to the algorithm in advance.
Blind inpainting: the algorithm must automatically find the regions to be filled, which is much harder to solve.
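The distinction between the two settings can be sketched in code. The following NumPy illustration is not from the paper: in the non-blind setting the corruption mask is an input, while in the blind setting the mask must first be estimated from the corrupted image itself (here, hypothetically, by thresholding salt & pepper extremes; the fill step is a trivial stand-in for a real inpainter).

```python
import numpy as np

def nonblind_inpaint(img, mask):
    """Non-blind: the mask of corrupted pixels is given.
    Replace masked pixels with the mean of the uncorrupted pixels
    (a trivial stand-in for a real inpainting algorithm)."""
    out = img.copy()
    out[mask] = img[~mask].mean()
    return out

def estimate_mask(img):
    """Blind: the algorithm must find the corrupted region itself.
    For salt & pepper noise, fully saturated pixels are a reasonable guess."""
    return (img == 0.0) | (img == 1.0)

def blind_inpaint(img):
    # Blind = estimate the mask, then fall back to the non-blind routine.
    return nonblind_inpaint(img, estimate_mask(img))

# Toy example: a flat 0.5 image with salt & pepper corruption.
rng = np.random.default_rng(0)
clean = np.full((8, 8), 0.5)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1
noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
restored = blind_inpaint(noisy)
```

On this toy image the estimated mask matches the true one exactly, since only corrupted pixels are saturated; real blind inpainting (e.g. overlaid text) is far harder, which is the point the paper makes.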
Each denoising auto-encoder (DA) is a two-layer network. A series of DAs is stacked to form a deep network, the Stacked Sparse Denoising Auto-encoder (SSDA). After the first layer is trained, the hidden-layer activations (sigmoid) of both the noisy input and the clean input serve as the training data for the second layer. Noisy patches are created during training by adding AWGN to clean image patches. Results are compared with the KSVD and BLS-GSM algorithms and evaluated using PSNR. The SSDA gives clearer borders and texture details than the other two algorithms and is better at denoising complex regions. Adding more layers improves performance but requires more training time.
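A minimal NumPy sketch of one DA layer in the scheme described above (hypothetical hyper-parameters and synthetic data; the paper's actual architecture and training details differ): a clean patch is corrupted with AWGN, encoded through a sigmoid hidden layer, decoded through a sigmoid output layer, and the weights are updated by gradient descent on the reconstruction error against the clean patch. The hidden activations of the noisy and clean inputs would then serve as training data for the next stacked layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_vis, n_hid, n_samples = 16, 8, 200

# Synthetic clean "patches" in [0, 1] and their AWGN-corrupted versions.
clean = rng.random((n_samples, n_vis))
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)

# One DA layer: encoder (W1, b1) and decoder (W2, b2).
W1 = rng.normal(0, 0.1, (n_vis, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_vis)); b2 = np.zeros(n_vis)
lr = 0.5

def forward(x):
    h = sigmoid(x @ W1 + b1)   # sigmoid hidden activations
    y = sigmoid(h @ W2 + b2)   # reconstruction
    return h, y

losses = []
for _ in range(300):
    h, y = forward(noisy)
    err = y - clean            # reconstruct the CLEAN patch from the noisy one
    losses.append(float((err ** 2).mean()))
    # Backprop through the two sigmoid layers (mean-squared-error loss).
    dy = 2 * err * y * (1 - y) / n_samples
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy;    b2 -= lr * dy.sum(0)
    W1 -= lr * noisy.T @ dh; b1 -= lr * dh.sum(0)

# Hidden activations of noisy AND clean inputs feed the next stacked layer.
h_noisy, _ = forward(noisy)
h_clean, _ = forward(clean)
```

Stacking repeats this procedure with `(h_noisy, h_clean)` as the new training pair, which is how the deep network is built layer by layer before any fine-tuning.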
The image inpainting experiments focus on text removal and follow the blind inpainting setting. Results are compared with the KSVD algorithm, even though KSVD is a non-blind method. The SSDA eliminates small-font text completely, while larger fonts are only dimmed.
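PSNR, the metric used for the comparisons above, is straightforward to compute; here is a sketch assuming images scaled to [0, 1], so the peak value is 1:

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test) ** 2)
    if mse == 0:
        return np.inf          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)       # constant 0.1 error -> MSE = 0.01
print(psnr(a, b))              # close to 20.0 dB
```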
Advantages and limitations:
1. Fully automatic.
2. Relies on supervised training with paired clean/corrupted data.
3. Can only remove noise patterns similar to those seen in the training data.
Note: for parameter tuning, refer to page 7 of the paper.