Adaptive Parameter Control in Particle Swarm Optimization Based on Proximal Policy Optimization

Authors: Ruiqi Fan, Lisong Wang, Shaohan Liu, Liang Liu, Fengtao Xu, Yizhuo Sun
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 2501-2518
Keywords: Particle swarm optimization, Reinforcement Learning, Proximal Policy Optimization.

Abstract

The performance of Particle Swarm Optimization (PSO) on complex problems depends heavily on parameter settings, and the algorithm is prone to premature convergence. This paper proposes a Reinforcement Learning (RL)-based adaptive multi-subgroup PSO framework, termed PPOPSO, in which an RL agent learns online to dynamically adjust the core behavioral parameters of multiple parallel PSO subgroups. The framework uses Proximal Policy Optimization (PPO) as the RL agent, whose decisions are informed by a state representation built from rich historical performance indicators. For each subgroup, the agent independently selects an action from a predefined set of parameter configurations, each encoding a different search strategy. The controlled multi-subgroup PSO system also integrates a periodic elite-particle migration mechanism to foster information sharing and maintain diversity among the subgroups. This design recasts the parameter adaptation challenge as a sequential decision-making process, allowing the system to autonomously balance exploration and exploitation according to the optimization stage and the state of each subgroup. Preliminary experimental results on the CEC2013 standard test function set indicate that, through RL-driven dynamic parameter adjustment, the proposed PPOPSO framework can exhibit superior performance compared to traditional methods, offering a promising new approach for complex optimization problems.
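The control loop described in the abstract can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: the parameter configurations, the sphere objective (standing in for a CEC2013 function), and a random config choice (standing in for the trained PPO policy's state-conditioned action) are all assumptions.

```python
import random

# Assumed parameter configurations (w, c1, c2); values are illustrative,
# each tuple encoding a different search strategy.
CONFIGS = [
    (0.9, 2.0, 1.0),  # exploration-leaning
    (0.7, 1.5, 1.5),  # balanced
    (0.4, 1.0, 2.0),  # exploitation-leaning
]

def sphere(x):
    """Toy objective, standing in for a CEC2013 benchmark function."""
    return sum(v * v for v in x)

class Subgroup:
    """One PSO subgroup with its own particles and group best."""
    def __init__(self, dim, n_particles, rng):
        self.rng = rng
        self.pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        self.vel = [[0.0] * dim for _ in range(n_particles)]
        self.pbest = [p[:] for p in self.pos]
        self.pbest_f = [sphere(p) for p in self.pos]
        gi = min(range(n_particles), key=lambda i: self.pbest_f[i])
        self.gbest, self.gbest_f = self.pbest[gi][:], self.pbest_f[gi]

    def step(self, w, c1, c2):
        """One standard PSO velocity/position update under the given parameters."""
        for i, (x, v) in enumerate(zip(self.pos, self.vel)):
            for d in range(len(x)):
                r1, r2 = self.rng.random(), self.rng.random()
                v[d] = (w * v[d]
                        + c1 * r1 * (self.pbest[i][d] - x[d])
                        + c2 * r2 * (self.gbest[d] - x[d]))
                x[d] += v[d]
            f = sphere(x)
            if f < self.pbest_f[i]:
                self.pbest[i], self.pbest_f[i] = x[:], f
                if f < self.gbest_f:
                    self.gbest, self.gbest_f = x[:], f

def run(n_subgroups=3, dim=5, iters=200, migrate_every=25, seed=0):
    rng = random.Random(seed)
    groups = [Subgroup(dim, 10, rng) for _ in range(n_subgroups)]
    for t in range(iters):
        for g in groups:
            # Placeholder policy: in PPOPSO a trained PPO agent would map each
            # subgroup's state (historical performance indicators) to a config index.
            w, c1, c2 = CONFIGS[rng.randrange(len(CONFIGS))]
            g.step(w, c1, c2)
        if (t + 1) % migrate_every == 0:
            # Periodic elite migration: share the best subgroup's gbest with the rest.
            best = min(groups, key=lambda g: g.gbest_f)
            for g in groups:
                if g is not best and best.gbest_f < g.gbest_f:
                    g.gbest, g.gbest_f = best.gbest[:], best.gbest_f
    return min(g.gbest_f for g in groups)

if __name__ == "__main__":
    print(run())
```

Decoupling per-subgroup parameter selection from the PSO update, as above, is what lets different subgroups follow different search strategies at the same time while migration keeps them loosely coupled.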