Signal Augmentation Method based on Mixing and Adversarial Training for Better Robustness and Generalization

Zhang, Li; Zhou, Gang; Sun, Gangyin; Wu, Chaopeng

DOI: 10.23919/JCN.2024.000067

Abstract: Deep learning methods are increasingly applied to wireless communication systems. However, collecting authentic signal data is challenging. Moreover, because neural networks are vulnerable to adversarial attacks, such attacks seriously threaten the security of communication systems built on deep learning models. Traditional signal augmentation methods expand the dataset through transformations such as rotation and flipping, but they do little to improve the adversarial robustness of the model. Meanwhile, common robustness techniques such as adversarial training not only incur high computational overhead but can also reduce accuracy on clean samples. In this work, we propose a signal augmentation method called Adversarial and Mixed-based Signal Augmentation (AMSA). The method improves the adversarial robustness of the model while expanding the dataset, without compromising generalization ability. It combines adversarial training with data mixing: selected pairs of samples drawn from an expanded dataset of original and adversarial samples are interpolated to form new samples, thereby generating more diverse data. We conduct experiments on the RML2016.10a and RML2018.01a datasets using automatic modulation recognition (AMR) models based on CNN, LSTM, CLDNN, and Transformer architectures, and compare performance in scenarios with different numbers of training samples. The results show that AMSA achieves adversarial robustness comparable to, or better than, standard adversarial training while reducing the degradation of the model's generalization performance on clean data.
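To make the AMSA workflow described above concrete, the following is a minimal sketch of the idea in PyTorch: craft adversarial counterparts of the original signals, pool originals and adversarial samples into an expanded set, then interpolate selected pairs to produce new training samples. The attack choice (one-step FGSM), the Beta-distributed mixing coefficient, and the random pairing strategy are illustrative assumptions, not the paper's exact recipe.

```python
# Hypothetical sketch of the AMSA idea (assumptions: FGSM attack,
# Beta-distributed mixing coefficient, random pairing of samples).
import torch
import torch.nn.functional as F


def fgsm_adversarial(model, x, y, eps=0.01):
    """One-step FGSM perturbation of a signal batch x (shape [B, 2, L])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).detach()


def amsa_augment(model, x, y, num_classes, eps=0.01, alpha=1.0):
    """Mix pairs drawn from the union of clean and adversarial samples."""
    x_adv = fgsm_adversarial(model, x, y, eps)
    pool_x = torch.cat([x, x_adv], dim=0)                  # expanded dataset
    pool_y = F.one_hot(torch.cat([y, y]), num_classes).float()

    perm = torch.randperm(pool_x.size(0))                  # random pairing
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient
    mixed_x = lam * pool_x + (1 - lam) * pool_x[perm]      # interpolate signals
    mixed_y = lam * pool_y + (1 - lam) * pool_y[perm]      # interpolate (soft) labels
    return mixed_x, mixed_y
```

The mixed batch carries soft labels, so it would be trained with a soft-target cross-entropy loss; because adversarial examples enter only through the mixing pool rather than replacing clean data, the clean distribution remains represented during training, which is consistent with the reduced clean-accuracy degradation reported in the abstract.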

Index terms: Adversarial training, automatic modulation recognition, data augmentation, mixing signals, robustness