Pitfalls for categorizations of objective interestingness measures for rule discovery

Research output: Chapter in Book/Report/Conference proceeding (Chapter)

18 Citations (Scopus)

Abstract

In this paper, we point out four pitfalls for categorizations of objective interestingness measures for rule discovery. Rule discovery, which is extensively studied in data mining, suffers from the problem of outputting a huge number of rules. An objective interestingness measure can be used to estimate the potential usefulness of a discovered rule based on the given data set, and thus hopefully serves as a countermeasure to this problem. Various measures have been proposed, resulting in systematic attempts to categorize them. We believe that such attempts are subject to four kinds of pitfalls: data bias, rule bias, expert bias, and search bias. The main objective of this paper is to issue an alert about these pitfalls, which are harmful to one of the most important research topics in data mining. We also list desiderata for categorizing objective interestingness measures.
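To make the notion concrete, the following sketch (not taken from the chapter; the function name, toy data, and choice of measures are illustrative assumptions) computes three widely used objective interestingness measures for an association rule A → B, support, confidence, and lift, directly from a small transaction data set:

```python
# Illustrative sketch: objective interestingness measures for a rule A -> B,
# computed from relative frequencies in the given transaction data set.
# Function name and toy data are hypothetical, chosen for this example only.

def rule_measures(transactions, antecedent, consequent):
    """Return (support, confidence, lift) of the rule antecedent -> consequent.

    transactions: list of sets of items
    antecedent, consequent: sets of items
    """
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)                 # count(A)
    n_b = sum(1 for t in transactions if consequent <= t)                 # count(B)
    n_ab = sum(1 for t in transactions if (antecedent | consequent) <= t) # count(A and B)
    support = n_ab / n                              # P(A and B)
    confidence = n_ab / n_a if n_a else 0.0         # P(B | A)
    lift = confidence / (n_b / n) if n_b else 0.0   # P(B | A) / P(B)
    return support, confidence, lift

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
print(rule_measures(transactions, {"bread"}, {"milk"}))
# support = 0.5, confidence = 2/3, lift = 8/9
```

Because each measure is a function of the data alone (no user-supplied preferences), it is "objective" in the sense used by the chapter; the pitfalls discussed concern how such measures are categorized, not their individual definitions.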

Original language: English
Title of host publication: Statistical Implicative Analysis
Subtitle of host publication: Theory and Applications
Editors: Régis Gras, Einoshin Suzuki, Fabrice Guillet, Filippo Spagnolo
Pages: 383-395
Number of pages: 13
Publication status: Published - 2008

Publication series

Name: Studies in Computational Intelligence
Volume: 127
ISSN (Print): 1860-949X

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
