Speech synthesis by mimicking articulatory movements

Masaaki Honda, Tokihiko Kaburagi, Takeshi Okadome

Research output: Contribution to journal › Conference article › peer-review

5 Citations (Scopus)


We describe a computational model of speech production that consists of two components: trajectory formation, which generates articulatory movements from phoneme-specific gestures, and articulatory-to-acoustic mapping, which generates the speech signal from the articulatory motion. Context-dependent and context-independent approaches to task-oriented trajectory formation are presented from the viewpoint of how to cope with contextual variability in articulatory movements. The model is evaluated by comparing the computed articulatory trajectories and speech acoustics with the originals. We also describe the recovery of articulatory motion from speech acoustics, which allows articulatory movements to be generated by mimicking the speech signal.
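The trajectory-formation stage described above can be illustrated with a minimal sketch. The code below is not the paper's model; it assumes a common simplification in which each articulator dimension is driven toward successive phoneme-specific targets by critically damped second-order dynamics, producing smooth movements that undershoot targets under fast timing (one way contextual variability arises). The function name, parameters, and target representation are all illustrative.

```python
import numpy as np

def trajectory_formation(targets, onsets, dur, dt=0.005, tau=0.03):
    """Illustrative trajectory formation (not the paper's exact model).

    targets : (n_phonemes, n_dim) array of phoneme-specific articulator targets
    onsets  : (n_phonemes,) array of segment onset times in seconds
    dur     : total utterance duration in seconds
    tau     : time constant of the critically damped point-attractor dynamics
    """
    n = int(dur / dt)
    n_dim = targets.shape[1]
    x = np.zeros((n, n_dim))          # articulator positions over time
    v = np.zeros(n_dim)               # articulator velocities
    x[0] = targets[0]
    for i in range(1, n):
        t = i * dt
        k = np.searchsorted(onsets, t, side="right") - 1
        g = targets[min(k, len(targets) - 1)]        # active phoneme target
        a = (g - x[i - 1]) / tau**2 - 2.0 * v / tau  # critically damped pull
        v = v + a * dt                               # Euler integration
        x[i] = x[i - 1] + v * dt
    return x

# Example: one articulator dimension moving between two phoneme targets.
traj = trajectory_formation(np.array([[0.0], [1.0]]),
                            np.array([0.0, 0.1]), dur=0.4)
```

A second stage (not sketched here) would map each articulatory frame to acoustics, e.g. via vocal-tract area functions or a learned mapping, to complete the synthesis pipeline the abstract outlines.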

Original language: English
Pages (from-to): II-463 - II-468
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Publication status: Published - 1999
Externally published: Yes
Event: 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan
Duration: Oct 12, 1999 - Oct 15, 1999

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Hardware and Architecture

