Abstract
Speech is generated by articulating speech organs such as the jaw, tongue, and lips according to their motor commands. We have developed a speech production model that converts a phoneme-specific motor task sequence into a continuous acoustic signal. This paper describes a computational model of the speech production process comprising a motor process, which generates articulatory movements from the motor task sequence, and an aero-acoustic process in the vocal tract, which produces the speech signal. Simulation results on continuous speech production show that the model can accurately predict actual articulatory movements and generate natural-sounding speech.
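The abstract describes a two-stage pipeline: a motor process that turns a phoneme-specific motor task sequence into continuous articulatory movements, followed by an aero-acoustic process that turns those movements into a speech waveform. The sketch below illustrates only that interface structure; the class and function names (`MotorTask`, `motor_process`, `aero_acoustic_process`), the target-interpolation rule, and all parameters are hypothetical stand-ins, not the paper's actual model.

```python
# Minimal sketch of the two-stage production pipeline, assuming hypothetical
# interfaces; the paper's motor dynamics and vocal-tract acoustics are not
# specified here.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class MotorTask:
    """A phoneme-specific motor task: target articulator positions (illustrative)."""
    phoneme: str
    targets: np.ndarray   # e.g. jaw, tongue, lip parameters
    duration_s: float

def motor_process(tasks: List[MotorTask], frame_rate: float = 200.0) -> np.ndarray:
    """Convert a motor task sequence into continuous articulatory trajectories
    by smoothly approaching each task's targets (stand-in for the motor process)."""
    frames = []
    current = tasks[0].targets.astype(float)
    for task in tasks:
        n_frames = max(1, int(task.duration_s * frame_rate))
        for _ in range(n_frames):
            # first-order approach toward the target (illustrative only)
            current = current + 0.2 * (task.targets - current)
            frames.append(current.copy())
    return np.stack(frames)

def aero_acoustic_process(articulation: np.ndarray, sample_rate: int = 16000,
                          frame_rate: float = 200.0) -> np.ndarray:
    """Placeholder for the vocal-tract aero-acoustic stage mapping articulatory
    trajectories to a speech waveform."""
    # A real implementation would derive vocal-tract area functions and solve
    # the acoustic propagation; here we only return a waveform of the right
    # length to show the interface.
    n_samples = int(len(articulation) / frame_rate * sample_rate)
    return np.zeros(n_samples)

# Usage: phoneme-specific motor tasks in, waveform out.
tasks = [MotorTask("a", np.array([0.1, 0.5, 0.2]), 0.15),
         MotorTask("i", np.array([0.4, 0.8, 0.1]), 0.12)]
waveform = aero_acoustic_process(motor_process(tasks))
```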
Original language | English |
---|---|
Pages (from-to) | 87-92 |
Number of pages | 6 |
Journal | NTT R and D |
Volume | 44 |
Issue number | 1 |
Publication status | Published - 1995 |
Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering