Recognition of local features for camera-based sign-language recognition system

Kazuyuki Imagawa, Rin Ichiro Taniguchi, Daisaku Arita, Hideaki Matsuo, Shan Lu, Seiji Igi

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects possible words by using the detected global features, then narrows the choices down to one by using the detected local features. In this paper, we describe an adequate local feature recognizer for a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters by using a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance can be classified into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in two-handed and hand-to-hand contact cases.
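The local feature recognizer described above represents hand images as cluster symbols in an eigenspace. The following is a minimal sketch of that general idea, not the authors' implementation: PCA (via SVD) builds the eigenspace from flattened training images, a simple k-means groups similar appearances into clusters, and a new hand image is mapped to the symbol of its nearest cluster. All names, dimensions, and the choice of k-means are illustrative assumptions.

```python
import numpy as np

def build_eigenspace(images, n_components):
    """PCA via SVD; images is an (n_samples, n_pixels) array."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of Vt are the principal axes (eigenvectors of the covariance).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def project(images, mean, components):
    """Project images into the eigenspace."""
    return (images - mean) @ components.T

def kmeans(points, k, n_iter=50, seed=0):
    """A basic k-means; the paper's actual clustering method may differ."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Training: cluster extracted hand images in the eigenspace; each
# cluster index then serves as the symbol for that hand appearance.
rng = np.random.default_rng(1)
train = rng.random((100, 64))   # stand-in for flattened hand images
mean, comps = build_eigenspace(train, n_components=8)
centers, labels = kmeans(project(train, mean, comps), k=5)

# Recognition: a new hand image is assigned the nearest cluster's symbol.
new_image = rng.random((1, 64))
symbol = np.linalg.norm(project(new_image, mean, comps) - centers,
                        axis=1).argmin()
```

Under this sketch, the sequence of symbols produced frame by frame would feed the word-level recognizer that narrows down candidates selected by the global features.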

Original language: English
Pages (from-to): 848-857
Number of pages: 10
Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
Issue number: 6
Publication status: Published - 2000

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering


