An adaptive approach for infinitely many-armed bandits under generalized rotting constraints

Kim, J., Vojnovic, M., & Yun, S. (2024). An adaptive approach for infinitely many-armed bandits under generalized rotting constraints. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., & Zhang, C. (Eds.), Advances in Neural Information Processing Systems 37 (NeurIPS 2024). Neural Information Processing Systems Foundation.

In this study, we consider the infinitely many-armed bandit problem in a rested rotting setting, where the mean reward of an arm may decrease with each pull and otherwise remains unchanged. We explore two scenarios for the rotting of rewards: one in which the cumulative amount of rotting is bounded by V_T, referred to as the slow-rotting case, and one in which the cumulative number of rotting instances is bounded by S_T, referred to as the abrupt-rotting case. To address the challenge posed by rotting rewards, we introduce an algorithm that utilizes UCB with an adaptive sliding window, designed to manage the bias-variance trade-off arising from rotting rewards. Our proposed algorithm achieves tight regret bounds for both the slow- and abrupt-rotting scenarios. Lastly, we demonstrate the performance of our algorithm through numerical experiments.
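To give intuition for the sliding-window idea, the sketch below computes a window-limited UCB index: averaging only the last h pulls of an arm reduces the bias from stale, pre-rotting observations, at the cost of a wider confidence radius from the smaller sample. This is a minimal illustration of the trade-off, not the paper's algorithm; in particular, the fixed window size h and the standard sqrt(2 log t / n) radius are assumptions here, whereas the paper's method adapts the window.

```python
import math

def sliding_window_ucb_index(rewards, h, t):
    """Illustrative sliding-window UCB index (not the paper's algorithm).

    rewards : full history of observed rewards for one arm, oldest first
    h       : window size (assumed fixed here; the paper adapts it)
    t       : current round, used in the standard exploration bonus

    A small h limits bias from rotted (stale) samples but inflates the
    confidence radius; a large h does the opposite.
    """
    window = rewards[-h:]  # keep only the h most recent observations
    n = len(window)
    mean = sum(window) / n
    radius = math.sqrt(2.0 * math.log(max(t, 1)) / n)
    return mean + radius

# A rotted arm: early rewards were high, recent pulls return less.
history = [1.0, 1.0, 1.0, 0.2, 0.2]
full = sliding_window_ucb_index(history, h=len(history), t=100)
recent = sliding_window_ucb_index(history, h=2, t=100)
# The short window tracks the current (rotted) mean 0.2 rather than the
# stale full-history mean 0.68, at the price of a larger radius.
```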
