Enhance Adversarial Attack against Trajectory Prediction Model by Considering the Characteristics of Trajectory Data and Model

Authors: Xin Guo, Yucheng Shi, Zhenghan Gao, Guangyao Bai, Yufei Gao, Lei Shi, and Wenwen Li
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 3152-3166
Keywords: adversarial example, gradient-based attack, trajectory prediction, robustness evaluation

Abstract

Recent studies have revealed the vulnerability of trajectory prediction (TP) models to gradient-based adversarial attacks. However, existing gradient-based attacks overlook the characteristics of trajectory data and models, limiting their effectiveness for robustness evaluation. We therefore propose a gradient-based attack algorithm that considers both the rich physical information in the data and common characteristics of the models. On the data side, unlike the images targeted by conventional adversarial attacks, trajectory data carries more physical information than pixels. Accordingly, our method introduces momentum into the gradient updates to preserve physical information from previous iterations, keeping the generated adversarial trajectory realistic. To search for more candidate adversarial trajectories, it also restarts from different initial states and updates with different step sizes. On the model side, unlike the convolutional neural networks (CNNs) used for image recognition, trajectory prediction models are typically based on recurrent neural networks (RNNs), which tend to focus on specific points in the input rather than treating all inputs with equal importance. We design an attention loss function to guide the attack toward the points the model attends to. Experiments on three models and two datasets show that our attack increases the average displacement error (ADE) of trajectory prediction by over 7.94% compared to the previous state-of-the-art gradient-based attack. Our code is open source at https://anonymous.4open.science/r/RMS-PGD.
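The abstract names three ingredients: momentum-accumulated gradient updates, restarts over multiple initial states and step sizes, and an attention-weighted loss. The PyTorch sketch below illustrates how these pieces could fit together; the model interface, the `attn` weights (assumed to be exposed by the predictor's attention layer), and all hyperparameter values are illustrative assumptions, not the authors' released RMS-PGD implementation.

```python
import torch


def attention_loss(pred, target, attn):
    # Weight each future timestep's displacement by the model's attention
    # weights, so the attack focuses on the points the RNN attends to.
    per_step = (pred - target).norm(dim=-1)      # (T_future,) L2 error per step
    return (attn * per_step).sum()


def momentum_pgd_attack(model, traj, target, attn, eps=0.1, steps=10, mu=1.0,
                        init_scales=(0.0, 0.5, 1.0), step_sizes=(0.01, 0.02)):
    # traj: (T_obs, 2) observed positions; target: (T_future, 2) ground truth.
    # Restart from several initial perturbations and step sizes, keeping the
    # adversarial trajectory that maximizes the attention-weighted error.
    best_adv, best_err = traj.clone(), float("-inf")
    for scale in init_scales:                    # different initial states
        for alpha in step_sizes:                 # different step sizes
            delta = scale * eps * torch.empty_like(traj).uniform_(-1, 1)
            g = torch.zeros_like(traj)           # momentum accumulator
            for _ in range(steps):
                delta.requires_grad_(True)
                loss = attention_loss(model(traj + delta), target, attn)
                grad, = torch.autograd.grad(loss, delta)
                # Momentum retains update directions from earlier iterations,
                # which damps step-to-step jitter and helps keep the perturbed
                # trajectory physically plausible.
                g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
                delta = (delta.detach() + alpha * g.sign()).clamp(-eps, eps)
            err = attention_loss(model(traj + delta), target, attn).item()
            if err > best_err:
                best_err, best_adv = err, (traj + delta).detach()
    return best_adv
```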