Computer and Modernization ›› 2023, Vol. 0 ›› Issue (10): 17-22.doi: 10.3969/j.issn.1006-2475.2023.10.003


Feature-level Multimodal Fusion for Depression Recognition

  

  1. (School of Computer Science, South China Normal University, Guangzhou 510631, China)
  • Online: 2023-10-26   Published: 2023-10-26

Abstract: Depression is a common psychiatric disorder. However, existing diagnostic methods for depression rely mainly on scales and interviews with psychiatrists, which are highly subjective. In recent years, researchers have worked to identify depressed patients from EEG features or audio features, but few studies have effectively combined EEG and audio information, overlooking the correlation between the two modalities. Therefore, this study proposes a feature-level multimodal fusion model to improve the accuracy of depression recognition. We combine audio and EEG modality information using a fully connected neural network. Our experiments show that the feature-level multimodal fusion model reaches 81.58% accuracy for depression recognition on the MODMA dataset, higher than that of either single modality alone. The results indicate that feature-level multimodal fusion can improve the accuracy of depression recognition compared to single-modality approaches. Our research provides a new perspective and method for depression recognition.
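The fusion approach the abstract describes can be sketched as follows: per-subject audio and EEG feature vectors are concatenated (feature-level fusion) and passed through fully connected layers for binary classification. This is a minimal illustrative sketch in PyTorch; the feature dimensions, layer sizes, and dropout rate are assumptions, not the paper's reported configuration, and random tensors stand in for MODMA features.

```python
import torch
import torch.nn as nn

class FeatureLevelFusionNet(nn.Module):
    """Feature-level fusion: concatenate audio and EEG feature vectors,
    then classify with a fully connected network.
    All dimensions below are illustrative assumptions."""

    def __init__(self, audio_dim=128, eeg_dim=256, hidden_dim=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + eeg_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden_dim, 2),  # two classes: depressed vs. control
        )

    def forward(self, audio_feat, eeg_feat):
        # feature-level fusion: concatenate along the feature dimension
        fused = torch.cat([audio_feat, eeg_feat], dim=1)
        return self.classifier(fused)

# Toy forward pass with random features standing in for real MODMA data.
model = FeatureLevelFusionNet()
audio = torch.randn(4, 128)  # batch of 4 hypothetical audio feature vectors
eeg = torch.randn(4, 256)    # matching hypothetical EEG feature vectors
logits = model(audio, eeg)
print(tuple(logits.shape))  # (4, 2)
```

In contrast to decision-level fusion (combining per-modality classifier outputs), feature-level fusion lets the network learn cross-modal interactions directly from the joint feature vector, which is the correlation the abstract argues single-modality methods ignore.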

Key words: multimodal data fusion, depression detection, feature-level fusion, fully connected neural networks
