Knowledge-based regularization in generative modeling

Naoya Takeishi, Yoshinobu Kawahara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)


Prior domain knowledge can greatly help in learning generative models. However, it is often too costly to hard-code prior knowledge as a specific model architecture, so we often have to use general-purpose models. In this paper, we propose a method to incorporate prior knowledge of feature relations into the learning of general-purpose generative models. To this end, we formulate a regularizer that makes the marginals of a generative model follow a prescribed relative dependence of features. It can be incorporated into off-the-shelf learning methods for many generative models, including variational autoencoders and generative adversarial networks, as its gradients can be computed using standard backpropagation techniques. We show the effectiveness of the proposed method with experiments on multiple types of datasets and generative models.
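The abstract describes a regularizer that penalizes a model whose marginals violate a prescribed *relative* dependence between feature pairs. A minimal sketch of this idea, under the assumption that dependence is measured by absolute Pearson correlation of samples and enforced with a hinge penalty (the function name, pair encoding, and dependence measure here are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def dependence_regularizer(samples, pairs, margin=0.0):
    """Hinge penalty on prescribed relative feature dependence.

    For each ((i, j), (k, l)) in `pairs`, features i and j should be
    more strongly dependent than features k and l. Dependence is
    approximated here by the absolute Pearson correlation of the
    model's samples (a simple stand-in measure for illustration).
    """
    corr = np.corrcoef(samples, rowvar=False)
    penalty = 0.0
    for (i, j), (k, l) in pairs:
        # Penalize whenever the pair prescribed to be weaker
        # exhibits stronger dependence than the prescribed-stronger pair.
        penalty += max(0.0, abs(corr[k, l]) - abs(corr[i, j]) + margin)
    return penalty

# Toy check: feature 0 and 1 are nearly identical, feature 2 is independent.
rng = np.random.default_rng(0)
z = rng.normal(size=1000)
samples = np.stack(
    [z, z + 0.1 * rng.normal(size=1000), rng.normal(size=1000)], axis=1
)
consistent = dependence_regularizer(samples, [((0, 1), (0, 2))])
violated = dependence_regularizer(samples, [((0, 2), (0, 1))])
```

In a training loop one would compute such a penalty on minibatches of generated samples with a differentiable correlation estimate (e.g., in PyTorch or JAX), so its gradient flows back through the generator alongside the usual VAE or GAN objective, as the abstract indicates.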

Original language: English
Title of host publication: Proceedings of the 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
Editors: Christian Bessiere
Publisher: International Joint Conferences on Artificial Intelligence
Number of pages: 7
ISBN (Electronic): 9780999241165
Publication status: Published - 2020
Event: 29th International Joint Conference on Artificial Intelligence, IJCAI 2020 - Yokohama, Japan
Duration: Jan 1 2021 → …

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823


Conference: 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
Period: 1/1/21 → …

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence


