Speech production model based on articulatory movements

Masaaki Honda, Tokihiko Kaburagi

Research output: Contribution to journal › Article › peer-review


Speech is generated by articulating speech organs such as the jaw, tongue, and lips according to their motor commands. We have developed a speech production model that converts a phoneme-specific motor task sequence into a continuous acoustic signal. This paper describes a computational model of the speech production process comprising a motor process, which generates articulatory movements from the motor task sequence, and an aero-acoustic process in the vocal tract, which produces the speech signal. Simulation results on continuous speech show that the model accurately predicts actual articulatory movements and generates natural-sounding speech.
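The two-stage pipeline described in the abstract (motor task sequence → articulatory trajectory → acoustic signal) can be sketched in simplified form. This is a minimal illustrative sketch, not the paper's actual model: the target-approach dynamics, the impulse-train source, and the one-pole filter standing in for the aero-acoustic process are all assumptions made here for clarity.

```python
import numpy as np

def articulatory_trajectory(targets, steps_per_target=50, alpha=0.2):
    """Motor stage (illustrative): drive an articulator state toward each
    phoneme-specific target with simple exponential-approach dynamics."""
    pos = np.zeros(len(targets[0]))
    traj = []
    for tgt in targets:
        tgt = np.asarray(tgt, dtype=float)
        for _ in range(steps_per_target):
            pos = pos + alpha * (tgt - pos)  # exponential approach to target
            traj.append(pos.copy())
    return np.array(traj)

def synthesize(traj, fs=8000, f0=120.0):
    """Aero-acoustic stage (toy stand-in): filter a crude glottal pulse
    train with a one-pole filter whose coefficient tracks articulation."""
    n = len(traj)
    t = np.arange(n) / fs
    source = np.sign(np.sin(2 * np.pi * f0 * t))  # square-wave glottal source
    out = np.zeros(n)
    y = 0.0
    for i in range(n):
        # map first articulatory dimension to a smoothing coefficient in (0.1, 0.9)
        a = 0.1 + 0.8 / (1.0 + np.exp(-traj[i, 0]))
        y = a * y + (1.0 - a) * source[i]
        out[i] = y
    return out

# Hypothetical two-dimensional task sequence for three phonemes
targets = [[-1.0, 0.5], [1.0, -0.3], [0.2, 0.8]]
traj = articulatory_trajectory(targets)
wave = synthesize(traj)
```

The sketch only illustrates the division of labor between the two processes; the actual model uses measured articulatory data and a physical vocal-tract simulation rather than these toy dynamics.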

Original language: English
Pages (from-to): 87-92
Number of pages: 6
Journal: NTT R and D
Issue number: 1
Publication status: Published - 1995
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
