Abstract:
Traditional approaches to speech emotion recognition, such as LSTMs, CNNs, RNNs,
SVMs, and MLPs, are limited in their ability to capture long-term dependencies in sequential data, to model temporal dynamics, and to learn complex patterns and relationships in multimodal data. This research addresses
complex patterns and relationships in multimodal data. This research addresses
these shortcomings by proposing an ensemble model that combines Graph Con volutional Networks (GCN) for processing textual data and the HuBERT trans former for analyzing audio signals. We found that GCNs excel at capturing Long term contextual dependencies and relationships within textual data by leveraging
graph-based representations of text and thus detecting the contextual meaning
and semantic relationships between words. On the other hand, HuBERT utilizes
self-attention mechanisms to capture long-range dependencies, enabling the mod eling of temporal dynamics present in speech and capturing subtle nuances and
variations that contribute to emotion recognition. By combining GCN and Hu BERT, our ensemble model can leverage the strengths of both approaches. This
allows for the simultaneous analysis of multimodal data, and the fusion of these
modalities enables the extraction of complementary information, enhancing the
discriminative power of the emotion recognition system. The results indicate that
the combined model can overcome the limitations of traditional methods, leading
to enhanced accuracy in recognizing emotions from speech.
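To make the fusion idea concrete, the following is a minimal PyTorch sketch of a GCN text branch combined with a pretrained HuBERT audio branch. It is an illustration under stated assumptions, not the exact architecture evaluated in this work: the layer sizes, mean pooling, concatenation-based late fusion, and the `facebook/hubert-base-ls960` checkpoint are illustrative choices.

import torch
import torch.nn as nn
from transformers import HubertModel  # pretrained speech encoder

class SimpleGCNLayer(nn.Module):
    """One GCN propagation step: H' = ReLU(A_hat @ H @ W),
    where A_hat is a (pre-)normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes)
        return torch.relu(adj @ self.linear(x))

class GcnHubertEnsemble(nn.Module):
    """Late-fusion sketch: GCN over a word graph + HuBERT over raw audio,
    with both modality vectors concatenated for classification."""
    def __init__(self, word_dim=300, hidden=256, num_emotions=4):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(word_dim, hidden)
        self.gcn2 = SimpleGCNLayer(hidden, hidden)
        self.hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960")
        self.classifier = nn.Linear(
            hidden + self.hubert.config.hidden_size, num_emotions)

    def forward(self, word_feats, adj, audio_waveform):
        # Text branch: two GCN layers over the word graph, mean-pooled
        # into a single utterance-level vector.
        h = self.gcn2(self.gcn1(word_feats, adj), adj)
        text_vec = h.mean(dim=0)
        # Audio branch: HuBERT frame embeddings, mean-pooled over time.
        audio_out = self.hubert(audio_waveform).last_hidden_state
        audio_vec = audio_out.mean(dim=1).squeeze(0)
        # Late fusion: concatenate modality vectors and classify.
        return self.classifier(torch.cat([text_vec, audio_vec]))

A hypothetical usage, with placeholder inputs standing in for real word-graph features and 16 kHz audio:

model = GcnHubertEnsemble()
words = torch.randn(12, 300)   # 12 word nodes with 300-dim embeddings
adj = torch.eye(12)            # placeholder normalized adjacency
wav = torch.randn(1, 16000)    # one second of 16 kHz audio
logits = model(words, adj, wav)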
Description:
Supervised by
Dr. Hasan Mahmud,
Associate Professor,
Mr. Fardin Saad,
Lecturer,
Dr. Md. Kamrul Hasan,
Professor,
Department of Computer Science and Engineering (CSE),
Islamic University of Technology (IUT),
Board Bazar, Gazipur-1704, Bangladesh