
Data augmentation for speech separation

Gastaldo P.;
2023-01-01

Abstract

Deep learning models have advanced the state of the art of monaural speech separation. However, the performance of a separation model decreases considerably when it is tested on unseen speakers and noisy conditions. Separation models trained with data augmentation generalize better to unseen conditions. In this paper, we conduct a comprehensive survey of data augmentation techniques and apply them to improve the generalization of time-domain speech separation models. The augmentation techniques include seven source-preserving approaches (Gaussian noise, Gain, Time masking, Frequency masking, Short noise, Time stretch, and Pitch shift) and three non-source-preserving approaches (Dynamic mixing, Mixup, and CutMix). After a hyperparameter search for each augmentation method, we test the generalization of the augmented models by cross-corpus testing on three datasets (LibriMix, TIMIT, and VCTK) and identify the augmentation combination that best enhances generalization. Experimental results indicate that a combination of the non-source-preserving strategies (CutMix, Mixup, and Dynamic mixing) yields the best generalization performance. Finally, the augmentation combinations also improve the performance of the speech separation model even when less training data is available.
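The paper's own implementations are not reproduced in this record. As an illustration only, two of the techniques named in the abstract could be sketched as follows: Gaussian noise at a target SNR (source-preserving) and waveform Mixup (non-source-preserving). Function names, parameters, and defaults here are our own assumptions, not the authors' code.

```python
import numpy as np

def add_gaussian_noise(wave, snr_db, rng=None):
    """Source-preserving augmentation: add white Gaussian noise
    scaled to a target signal-to-noise ratio (in dB)."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

def mixup(wave_a, wave_b, alpha=0.2, rng=None):
    """Non-source-preserving augmentation: convex combination of two
    waveforms with a Beta(alpha, alpha)-sampled mixing coefficient."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * wave_a + (1.0 - lam) * wave_b, lam
```

For a separation task, Mixup would be applied consistently to the input mixture and its target sources; the sketch above only shows the waveform-level operation.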
Files in this product:

File: 1-s2.0-S0167639323000778-main.pdf (open access)
Type: published (publisher's) version
Size: 3.16 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1141940
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0