Abstract:
Image restoration deals with the removal of noise, blurriness, missing patches, and other
kinds of distortions in damaged images. Traditional reconstruction and restoration approaches suffer from various limitations. In our work, we improve upon
those models by introducing a novel structure loss that emphasizes the overall image structure rather than individual pixels. Our proposed model, StructGAN, achieves a higher
SSIM (Structural Similarity Index Measure) score without substantially compromising other
noise metrics. Overall, our proposed model uses a generative adversarial network with a
two-step generator network, a dual discriminator network, and a coherent semantic attention (CSA) layer. The two-step generator helps refine the output. The dual discriminator
ensures local and global correctness. The CSA layer ensures semantic consistency. Along
with these, our model incorporates the novel structure loss. The structure loss is based
on the Laplacian filter, which computes the overall structure map of the image; the model is then trained to
replicate this structure map in the generation step. The results obtained by our model are
qualitatively comparable to those of state-of-the-art models. For certain
metrics, such as SSIM, StructGAN quantitatively outperforms other models.
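To make the structure loss concrete, the following is a minimal sketch of a Laplacian-based structure loss. The 3x3 kernel, the zero padding, and the L1 distance between structure maps are assumptions for illustration; the thesis's actual formulation may differ.

```python
import numpy as np

# 3x3 Laplacian kernel (an assumed choice; the exact kernel is not specified here)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def structure_map(img):
    """Convolve a 2-D grayscale image with the Laplacian kernel (zero padding)."""
    padded = np.pad(img.astype(np.float64), 1, mode="constant")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

def structure_loss(generated, target):
    """Mean absolute difference between the Laplacian structure maps of two images."""
    return np.mean(np.abs(structure_map(generated) - structure_map(target)))
```

In a GAN training loop, this term would be added to the generator's objective so that the generated image's edge structure matches that of the ground truth, rather than penalizing per-pixel intensity differences alone.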
Description:
Supervised by
Prof. Md. Hasanul Kabir,
Department of Computer Science and Engineering (CSE),
Islamic University of Technology (IUT),
Board Bazar, Gazipur-1704, Bangladesh