TY - CPAPER
T1 - A Data-Driven Dynamic Discretization Framework to Solve Combinatorial Problems Using Continuous Metaheuristics
AU - Cisternas-Caneo, Felipe
AU - Crawford, Broderick
AU - Soto, Ricardo
AU - de la Fuente-Mella, Hanns
AU - Tapia, Diego
AU - Lemus-Romani, José
AU - Castillo, Mauricio
AU - Becerra-Rozas, Marcelo
AU - Paredes, Fernando
AU - Misra, Sanjay
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Combinatorial optimization problems are very common in the real world but difficult to solve. Among the promising algorithms that have been successful in solving these problems are metaheuristics. The two basic search behaviors used in metaheuristics are exploration and exploitation, and the success of metaheuristic search largely depends on the balance of these two behaviors. Machine learning techniques have provided considerable support to improve data-driven optimization algorithms. One of the techniques that stands out is Q-Learning, a reinforcement learning technique that penalizes or rewards actions according to the consequences they entail. In this work, a general discretization framework is proposed where Q-Learning can adapt a continuous metaheuristic to work in discrete domains. In particular, we use Q-Learning so that the algorithm learns an optimal binarization scheme selection policy. The policy is dynamically updated based on the performance of the binarization schemes in each iteration. Preliminary experiments using our framework with the Sine Cosine Algorithm show that the proposal presents promising results compared to other algorithms.
AB - Combinatorial optimization problems are very common in the real world but difficult to solve. Among the promising algorithms that have been successful in solving these problems are metaheuristics. The two basic search behaviors used in metaheuristics are exploration and exploitation, and the success of metaheuristic search largely depends on the balance of these two behaviors. Machine learning techniques have provided considerable support to improve data-driven optimization algorithms. One of the techniques that stands out is Q-Learning, a reinforcement learning technique that penalizes or rewards actions according to the consequences they entail. In this work, a general discretization framework is proposed where Q-Learning can adapt a continuous metaheuristic to work in discrete domains. In particular, we use Q-Learning so that the algorithm learns an optimal binarization scheme selection policy. The policy is dynamically updated based on the performance of the binarization schemes in each iteration. Preliminary experiments using our framework with the Sine Cosine Algorithm show that the proposal presents promising results compared to other algorithms.
KW - Combinatorial optimization
KW - Metaheuristics
KW - Q-Learning
KW - Swarm intelligence
UR - http://www.scopus.com/inward/record.url?scp=85104805846&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-73603-3_7
DO - 10.1007/978-3-030-73603-3_7
M3 - Conference contribution
AN - SCOPUS:85104805846
SN - 9783030736026
T3 - Advances in Intelligent Systems and Computing
SP - 76
EP - 85
BT - Innovations in Bio-Inspired Computing and Applications - Proceedings of the 11th International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2020
A2 - Abraham, Ajith
A2 - Sasaki, Hideyasu
A2 - Rios, Ricardo
A2 - Gandhi, Niketa
A2 - Singh, Umang
A2 - Ma, Kun
PB - Springer Science and Business Media Deutschland GmbH
T2 - 11th International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2020 and 10th World Congress on Information and Communication Technologies, WICT 2020
Y2 - 16 December 2020 through 18 December 2020
ER -