Self-augmented multi-modal feature embedding

Shinnosuke Matsuo, Seiichi Uchida, Brian Kenji Iwana

Research output: Contribution to journal › Conference article › Peer-reviewed

1 citation (Scopus)

Abstract

Oftentimes, patterns can be represented through different modalities. For example, leaf data can be in the form of images or contours. Handwritten characters can also be either online or offline. To exploit this fact, we propose the use of self-augmentation and combine it with multi-modal feature embedding. In order to take advantage of the complementary information from the different modalities, the self-augmented multi-modal feature embedding employs a shared feature space. Through experimental results on classification with online handwriting and leaf images, we demonstrate that the proposed method can create effective embeddings.
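The core idea in the abstract is that two modalities of the same pattern (e.g., a leaf image and a leaf contour) are mapped by modality-specific encoders into one shared feature space, where their embeddings have the same dimensionality and can be compared directly. The following is a minimal illustrative sketch of that shared-space setup, not the authors' implementation; the dimensions, the linear projections standing in for learned encoders, and the cosine-similarity comparison are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: image features and contour features are
# projected into a common shared embedding space.
D_IMG, D_CONTOUR, D_SHARED = 64, 32, 16

# Modality-specific projections (stand-ins for learned encoders).
W_img = rng.standard_normal((D_IMG, D_SHARED)) / np.sqrt(D_IMG)
W_contour = rng.standard_normal((D_CONTOUR, D_SHARED)) / np.sqrt(D_CONTOUR)

def embed(x, W):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Two "views" of the same pattern: an image feature and a contour feature.
x_img = rng.standard_normal(D_IMG)
x_contour = rng.standard_normal(D_CONTOUR)

z_img = embed(x_img, W_img)
z_contour = embed(x_contour, W_contour)

# Both embeddings now live in the same 16-dimensional space, so they can be
# compared directly, e.g., with cosine similarity for an alignment objective.
cos_sim = float(z_img @ z_contour)
print(z_img.shape, z_contour.shape)
```

In a trained system, an objective would pull the two embeddings of the same pattern together in this shared space, letting each modality's complementary information shape the common representation.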

Original language: English
Pages (from-to): 3995-3999
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOI
Publication status: Published - 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: Jun 6 2021 - Jun 11 2021

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
