A Survey of Reinforcement Learning Approaches for Tuning Particle Swarm Optimization

3rd International Conference on Chemo and BioInformatics, Kragujevac, September 25-26, 2025 (pp. 198-201)

 

АУТОР(И) / AUTHOR(S): Bogdan Milićević, Vladimir Milovanović

 


DOI: 10.46793/ICCBIKG25.198M

САЖЕТАК / ABSTRACT:

Particle Swarm Optimization (PSO) remains a popular, simple, and strong baseline for numerical optimization, yet its performance depends critically on a small set of hyper-parameters (e.g., the inertia weight w and the cognitive and social coefficients c1 and c2) and on structural design choices (e.g., topology and velocity clamps). Over the last decade, reinforcement learning (RL) has emerged as a principled, data-driven way to adapt these design choices online, whether by directly controlling parameters, reshaping swarm interactions, selecting variation operators, or transferring control policies across runs. This survey systematizes RL-for-PSO tuning along four families: (1) direct parameter control, (2) topology/structure control, (3) operator/strategy selection, and (4) cross-run memory and transfer. We highlight representative methods, including tabular Q-learning, Deep Q-Networks (DQN), Deep Deterministic Policy Gradient (DDPG), and hybrid RL-PSO schemes; summarize empirical evidence; and distill practical design patterns (state, action, reward, and training protocols). We conclude with open challenges in stability, sample efficiency, safety-constrained control, and reproducible benchmarking.
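The first family in the taxonomy, direct parameter control, can be illustrated with a minimal sketch: a one-state tabular Q-learning agent chooses the inertia weight w each iteration and is rewarded by the improvement of the global best. This is a simplified illustration under stated assumptions, not the implementation of any specific cited method; all function and variable names (e.g., `rl_pso`, `w_actions`) are hypothetical.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def rl_pso(dim=5, n_particles=20, iters=200, seed=0):
    """PSO where a one-state Q-learning agent picks the inertia weight w.
    Illustrative sketch only; not a reproduction of any surveyed method."""
    rng = random.Random(seed)
    c1 = c2 = 1.5                       # cognitive / social coefficients
    w_actions = [0.4, 0.7, 0.9]         # discrete inertia-weight choices
    q = [0.0] * len(w_actions)          # Q-values, one per action
    eps, alpha, gamma = 0.2, 0.1, 0.9   # exploration, learning rate, discount

    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]

    for _ in range(iters):
        # epsilon-greedy action selection over the inertia-weight choices
        if rng.random() < eps:
            a = rng.randrange(len(w_actions))
        else:
            a = max(range(len(w_actions)), key=lambda i: q[i])
        w = w_actions[a]
        before = sphere(gbest)

        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-1.0, min(1.0, vel[i][d]))  # velocity clamp
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]

        # reward = improvement of the global best; standard Q-learning update
        reward = before - sphere(gbest)
        q[a] += alpha * (reward + gamma * max(q) - q[a])
    return gbest

best = rl_pso()
```

The sketch exposes the three design patterns the survey distills: the state (here trivially a single state), the action set (discrete w values), and the reward (global-best improvement). Deep variants such as DQN or DDPG replace the Q-table with a neural network and allow richer, continuous state and action spaces.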

КЉУЧНЕ РЕЧИ / KEYWORDS:

particle swarm optimization, reinforcement learning, parameter tuning

ПРОЈЕКАТ / ACKNOWLEDGEMENT:

This research was supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia, contract numbers 451-03-136/2025-03/200378 (Institute of Information Technologies, University of Kragujevac) and 451-03-136/2025-03/200107 (Faculty of Engineering, University of Kragujevac).

ЛИТЕРАТУРА / REFERENCES:

  • J. Kennedy, R. Eberhart, Particle Swarm Optimization, Proceedings of the IEEE International Conference on Neural Networks (ICNN’95), 4 (1995) 1942–1948.
  • Y. Shi, R.C. Eberhart, Parameter Selection in Particle Swarm Optimization, Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC’98), (1998).
  • Z. Qin, L. Huang, X. Ding, Adaptive Inertia Weight Particle Swarm Optimization, in: ICAISC 2006, LNAI 4029, (2006) 450–459.
  • S. Kessentini, D. Barchiesi, Particle Swarm Optimization with Adaptive Inertia Weight, International Journal of Machine Learning and Computing, 5 (2015) 368–373.
  • Y. Liu, H. Lu, S. Cheng, Y. Shi, An Adaptive Online Parameter Control Algorithm for Particle Swarm Optimization Based on Reinforcement Learning, Proceedings of the IEEE Congress on Evolutionary Computation (CEC), (2019) 815–822.
  • R. Olivares, F. Jaramillo, A Learning-Based Particle Swarm Optimizer for Solving Continuous Optimization Problems, Algorithms, 12(7) (2023) 643.
  • O. Aoun, Deep Q-Network-Enhanced Self-Tuning Control of Particle Swarm Optimization, Modelling, 5(4) (2024) 1709–1728.
  • S. Yin, M. Jin, H. Lu, G. Gong, W. Mao, G. Chen, W. Li, Reinforcement-Learning-Based Parameter Adaptation Method for Particle Swarm Optimization, Complex & Intelligent Systems, 9 (2023) 5585–5609.
  • Y. Xu, D. Pi, A Reinforcement-Learning-Based Communication Topology in Particle Swarm Optimization, Neural Computing and Applications, 32 (2020) 10007–10032.
  • W. Li, P. Liang, B. Sun, Y. Sun, Y. Huang, Reinforcement Learning-Based Particle Swarm Optimization with Neighborhood Differential Mutation Strategy (NRLPSO), Swarm and Evolutionary Computation, (2023) 101274.
  • L. Lu, H. Zheng, J. Jie, M. Zhang, R. Dai, Reinforcement Learning-Based Particle Swarm Optimization for Sewage Treatment Control, Complex & Intelligent Systems, 7 (2021) 2199–2210.