Abstract
Generating realistic images is an important problem in the field of computer vision. In image generation tasks, generating images consistent with an input given by the user is called conditional image generation. Owing to recent advances in generating high-quality images with Generative Adversarial Networks, many conditional image generation models have been proposed, such as text-to-image, scene-graph-to-image, and layout-to-image models. Among them, scene-graph-to-image models have the advantage of generating an image for a complex situation according to the structure of a scene graph. However, existing scene-graph-to-image models have difficulty capturing positional relations among three or more objects, since a scene graph can only represent relations between two objects. In this paper, we propose a novel image generation model that addresses this shortcoming by generating images from a hyper scene graph with trinomial edges. We also use a layout-to-image model supplementarily to generate higher-resolution images. Experimental validations on the COCO-Stuff and Visual Genome datasets show that the proposed model generates images that are more natural and more faithful to the user's inputs than a cutting-edge scene-graph-to-image model.
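To illustrate the idea of a trinomial edge, the following minimal Python sketch contrasts an ordinary binary scene-graph relation with a hyperedge connecting three objects. The class and field names are hypothetical and are not taken from the paper; they only show how a hyper scene graph could be represented as a data structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch: a standard scene-graph edge relates two objects,
# while a trinomial hyperedge relates one subject to two reference objects
# with a single positional predicate (e.g. "between").

@dataclass
class HyperSceneGraph:
    objects: List[str]                                                  # e.g. ["person", "dog", "tree"]
    binary_edges: List[Tuple[int, str, int]] = field(default_factory=list)
    trinomial_edges: List[Tuple[int, str, int, int]] = field(default_factory=list)

# Example: "a dog between a person and a tree", plus one binary relation.
g = HyperSceneGraph(
    objects=["person", "dog", "tree"],
    binary_edges=[(0, "next to", 2)],          # person next to tree
    trinomial_edges=[(1, "between", 0, 2)],    # dog between person and tree
)
```

A graph like this could then be consumed by a graph-convolution stage to predict a layout, with a layout-to-image model producing the final image, as the abstract describes.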
| Original language | English |
|---|---|
| Pages (from-to) | 185-195 |
| Number of pages | 11 |
| Journal | Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
| Volume | 5 |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2023 - Lisbon, Portugal. Duration: Feb 19, 2023 → Feb 21, 2023 |
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
- Human-Computer Interaction