Abstract
Ultrasound beamformers play a critical role in medical imaging, and understanding their robustness under worst-case scenarios is essential for reliable performance. This study investigates the adversarial robustness of two beamformers based on deep neural networks (DNNs) trained end-to-end to produce B-mode reconstructions directly from raw ultrasound channel data. Results reveal contrasting behaviors under adversarial perturbations. The beamformer that is initially superior on clean data becomes highly susceptible to perturbations, producing irregular inclusion shapes and artifacts, while the other exhibits greater resistance. Image quality metrics confirm these findings, with drops of up to 50 dB for one beamformer versus 10 dB for the other. Differences in the target data and in the transformations learned by the DNNs contribute to these contrasting behaviors. Overall, this study sheds light on the robustness of DNN-based beamformers and provides insights for future design considerations.
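To make the evaluation setting concrete, the sketch below shows one common way to generate a worst-case adversarial perturbation of raw channel data via a single FGSM-style gradient step; it is a minimal illustration, not the paper's exact attack. The `beamformer` model, the MSE loss, and the data shapes are hypothetical stand-ins, since the abstract does not specify them.

```python
# Minimal FGSM-style sketch: perturb raw ultrasound channel data to degrade
# a DNN beamformer's B-mode reconstruction. Assumes a PyTorch model mapping
# channel data -> B-mode image; model, loss, and shapes are illustrative only.
import torch
import torch.nn.functional as F

def fgsm_perturb(beamformer: torch.nn.Module,
                 channel_data: torch.Tensor,
                 target_bmode: torch.Tensor,
                 epsilon: float = 1e-3) -> torch.Tensor:
    """Return channel data perturbed by one signed-gradient step of size epsilon."""
    x = channel_data.clone().detach().requires_grad_(True)
    recon = beamformer(x)                   # B-mode reconstruction from raw data
    loss = F.mse_loss(recon, target_bmode)  # assumed reconstruction loss
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon
    return (x + epsilon * x.grad.sign()).detach()
```

Comparing reconstructions from clean and perturbed inputs (e.g., via image quality metrics in dB, as reported above) then quantifies each beamformer's robustness.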