Expansion of training texts to generate a topic-dependent language model for meeting speech recognition

Kazushige Egashira, Kazuya Kojima, Masaru Yamashita, Katsuya Yamauchi, Shoichi Matsunaga

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes methods for expanding a baseline set of training texts to generate a topic-dependent language model for more accurate recognition of meeting speech. Preparing a universal language model that can cope with the variety of topics discussed in meetings is very difficult. Our strategy is therefore to generate topic-dependent training texts using two methods. The first is text collection from web pages using queries composed of topic-dependent confident terms; these terms are selected from preparatory recognition results based on the TF-IDF (term frequency, inverse document frequency) value of each term. The second is text generation using participants' names. The topic-dependent language model is then generated from these new texts together with the baseline corpus. The language model generated by the proposed strategy reduced perplexity by 16.4% and the out-of-vocabulary rate by 37.5% compared with a language model trained on the baseline corpus alone. This improvement was also confirmed through meeting speech recognition.
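The abstract does not give the exact weighting or thresholds, but the first expansion step can be pictured as a minimal sketch: score terms from a preparatory recognition transcript by TF-IDF against a background collection, then form a web-search query from the top-ranked terms. The function names (`tfidf_scores`, `build_query`), the `top_k` cutoff, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): selecting topic-dependent
# "confident terms" from a preparatory recognition transcript by TF-IDF,
# then forming a web-search query from the top-ranked terms.
import math
from collections import Counter

def tfidf_scores(target_terms, background_docs):
    """Score each term of the target transcript by TF-IDF.

    target_terms:    list of terms from the preparatory recognition result.
    background_docs: list of term lists used to estimate document frequency.
    """
    tf = Counter(target_terms)
    n_docs = len(background_docs) + 1  # +1 counts the target transcript itself
    df = Counter()
    for doc in background_docs:
        for term in set(doc):
            df[term] += 1
    scores = {}
    for term, count in tf.items():
        idf = math.log(n_docs / (1 + df[term]))
        scores[term] = (count / len(target_terms)) * idf
    return scores

def build_query(target_terms, background_docs, top_k=5):
    """Form a search query from the top-k TF-IDF terms (cutoff is an assumption)."""
    scores = tfidf_scores(target_terms, background_docs)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return " ".join(ranked[:top_k])

# Hypothetical recognized meeting segment versus a small background collection.
transcript = ["budget", "server", "migration", "budget", "deadline"]
background = [["weather", "meeting", "schedule"], ["budget", "report", "review"]]
print(build_query(transcript, background, top_k=3))
```

The resulting query string would then be submitted to a web search engine, and the retrieved pages added to the baseline corpus before re-estimating the language model.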

Original language: English
Title of host publication: 2012 Conference Handbook - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Publication status: Published - 2012
Externally published: Yes
Event: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012 - Hollywood, CA, United States
Duration: Dec 3, 2012 – Dec 6, 2012

Other

Other: 2012 4th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2012
Country/Territory: United States
City: Hollywood, CA
Period: 12/3/12 – 12/6/12

All Science Journal Classification (ASJC) codes

  • Information Systems
