On regret-optimal cooperative nonstochastic multi-armed bandits
We consider the nonstochastic multi-agent multi-armed bandit problem in which agents collaborate over a communication network with delays. We prove a lower bound on the individual regret of all agents. We show that, with suitable regularizers and communication protocols, a collaborative multi-agent follow-the-regularized-leader (FTRL) algorithm has an individual regret upper bound matching the lower bound up to a constant factor when the number of arms is large enough relative to the degrees of the agents in the communication graph. We also show that an FTRL algorithm with a suitable regularizer is regret optimal with respect to scaling with the edge-delay parameter. We present numerical experiments validating our theoretical results and demonstrating cases where our algorithms outperform previously proposed algorithms.
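To illustrate the FTRL framework the abstract refers to, here is a minimal single-agent sketch: FTRL with the negative-entropy regularizer recovers the classical Exp3 exponential-weights update with importance-weighted loss estimates. This is only an illustration under simplified assumptions; the paper's collaborative multi-agent algorithm, its choice of regularizers, and its delayed-communication protocol are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftrl_exp3(losses, eta):
    """Single-agent FTRL with the negative-entropy regularizer (Exp3-style).

    losses: (T, K) array of per-round losses in [0, 1].
    eta: learning rate.
    Returns the total loss incurred by the sampled arms.
    """
    T, K = losses.shape
    cum_est = np.zeros(K)  # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(T):
        # Minimizing <cum_est, p> + (1/eta) * neg-entropy(p) over the simplex
        # yields the exponential-weights distribution (shifted for stability).
        w = np.exp(-eta * (cum_est - cum_est.min()))
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        total += losses[t, arm]
        # Unbiased importance-weighted estimator of the full loss vector:
        # only the played arm's estimate is updated.
        cum_est[arm] += losses[t, arm] / p[arm]
    return total
```

With a learning rate of order $\sqrt{\log K / (TK)}$, this sketch attains the standard $O(\sqrt{TK\log K})$ regret of a single agent; the paper's point is that collaboration over the network improves each agent's individual regret beyond this baseline.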
| Item Type | Article |
|---|---|
| Keywords | bandit problem, multi-agent system, regret minimization |
| Departments | LSE Statistics |
| DOI | 10.5555/3545946.3598780 |
| Date Deposited | 24 May 2024 11:36 |
| URI | https://researchonline.lse.ac.uk/id/eprint/123623 |