Image to Image Translation With Multi-Scale Generator

dc.contributor.author Morshed, Mashrur Mahmud
dc.contributor.author Iqbal, Hasan Tanvir
dc.contributor.author Rishad, Mazharul Islam
dc.date.accessioned 2022-03-25T09:40:27Z
dc.date.available 2022-03-25T09:40:27Z
dc.date.issued 2021-03-30
dc.identifier.uri http://hdl.handle.net/123456789/1287
dc.description Supervised by Mr. Hasan Mahmud, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.description.abstract Image-to-image translation is a highly generalized learning task that can be applied to a wide number of computer vision application domains. Conditional Generative Adversarial Networks (cGANs) are used to perform image-to-image translation. The generator network typically used in the existing cGAN approach, Pix2Pix, adopts the U-Net architecture, consisting of encoding and decoding convolutional layers and skip-connections between layers of the same resolution. While effective and convenient, such an arrangement is also restrictive in some ways, as the feature reconstruction process in the decoder cannot utilize multi-scale features. In our work, we study a generator architecture where feature maps are propagated to the decoder from different resolution levels. We have experimentally shown improved performance on two different datasets: the NYU-V2 depth dataset and the Labels2Facades dataset. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.title Image to Image Translation With Multi-Scale Generator en_US
dc.type Thesis en_US
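The abstract contrasts a plain U-Net, whose decoder receives only the same-resolution encoder map at each skip connection, with a multi-scale generator whose decoder can draw on encoder features from every resolution level. A minimal NumPy sketch of that idea is below; it is an illustration of the multi-scale skip pattern only, not the thesis's actual implementation, and the function names, the nearest-neighbour resizing, and the average-pooling encoder are all assumptions made for the example.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: halves the spatial resolution of a square map
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x, factor):
    # nearest-neighbour upsampling by an integer factor
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def multi_scale_skips(image, levels=3):
    """Build encoder feature maps at several resolutions, then, for each
    decoder level, gather ALL encoder maps resized to that level's
    resolution -- unlike a plain U-Net skip connection, which forwards
    only the map of matching resolution."""
    # encoder path: progressively pooled feature maps
    feats = [image]
    for _ in range(levels - 1):
        feats.append(avg_pool2(feats[-1]))
    # decoder path: at each resolution, stack every encoder scale
    fused = []
    for f in feats:
        target = f.shape[0]
        resized = [
            upsample(g, target // g.shape[0]) if g.shape[0] < target
            else g[::g.shape[0] // target, ::g.shape[1] // target]
            for g in feats
        ]
        fused.append(np.stack(resized))  # shape: (levels, target, target)
    return fused

img = np.arange(16.0).reshape(4, 4)
out = multi_scale_skips(img, levels=3)
print([f.shape for f in out])  # -> [(3, 4, 4), (3, 2, 2), (3, 1, 1)]
```

In a real cGAN generator the stacked maps would be learned convolutional features concatenated along the channel axis and mixed by further convolutions; the sketch keeps only the resolution bookkeeping that distinguishes a multi-scale decoder input from a single-scale skip connection.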

