Log-sum-exponential estimator for off-policy evaluation and learning

Behnamnia, A., Aminian, G., Aghaei, A., Shi, C., Tan, V. Y. F. & Rabiee, H. R. (2025). Log-sum-exponential estimator for off-policy evaluation and learning. Proceedings of Machine Learning Research, 267.

Off-policy learning and evaluation scenarios leverage logged bandit feedback datasets, which contain context, action, propensity score, and feedback for each data point. These scenarios face significant challenges due to high variance and poor performance with low-quality propensity scores and heavy-tailed reward distributions. We address these issues by introducing a novel estimator based on the log-sum-exponential (LSE) operator, which outperforms traditional inverse propensity score estimators. Our LSE estimator demonstrates variance reduction and robustness under heavy-tailed conditions. For off-policy evaluation, we derive upper bounds on the estimator's bias and variance. In the off-policy learning scenario, we establish bounds on the regret, the performance gap between our LSE estimator and the optimal policy, assuming a bounded (1 + ϵ)-th moment of the weighted reward. Notably, we achieve a convergence rate of O(n^{−ϵ/(1+ϵ)}) for the regret bounds, where n is the number of training samples and ϵ ∈ [0, 1]. Theoretical analysis is complemented by comprehensive empirical evaluations in both off-policy learning and evaluation scenarios, confirming the practical advantages of our approach. The code for our estimator is available at the following link.
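To make the idea concrete, the following is a minimal, hypothetical Python sketch contrasting a plain inverse propensity score (IPS) estimate with an LSE-style estimate of a target policy's value. The specific form (1/λ) log(mean(exp(λ · w_i · r_i))) with a negative temperature λ, and the names ips_estimator, lse_estimator, and lam, are illustrative assumptions rather than the paper's exact construction; refer to the linked code for the authors' implementation.

    import numpy as np

    def ips_estimator(rewards, target_probs, logging_probs):
        """Plain inverse propensity score (IPS) estimate of the target policy's value."""
        weights = target_probs / logging_probs        # importance weights w_i
        return np.mean(weights * rewards)

    def lse_estimator(rewards, target_probs, logging_probs, lam=-0.1):
        """Illustrative LSE-style estimate: (1/lam) * log(mean(exp(lam * w_i * r_i))).

        A negative temperature lam damps the contribution of very large
        weighted rewards, which is the source of the variance reduction and
        heavy-tail robustness described in the abstract; as lam -> 0 the
        estimate approaches the plain IPS average.
        """
        z = (target_probs / logging_probs) * rewards  # weighted rewards w_i * r_i
        m = np.max(lam * z)                           # shift for numerical stability
        return (m + np.log(np.mean(np.exp(lam * z - m)))) / lam

A quick synthetic check (hypothetical data, not from the paper) illustrates the usage: with heavy-tailed rewards and small logged propensities, the LSE estimate is less sensitive to a few huge weighted rewards than the IPS average.

    rng = np.random.default_rng(0)
    n = 10_000
    rewards = rng.exponential(1.0, size=n)            # heavy-ish reward distribution
    logging_probs = rng.uniform(0.05, 1.0, size=n)    # logged propensity scores
    target_probs = rng.uniform(0.05, 1.0, size=n)     # target-policy propensities
    print(ips_estimator(rewards, target_probs, logging_probs))
    print(lse_estimator(rewards, target_probs, logging_probs, lam=-0.5))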

Accepted Version
Creative Commons: Attribution 4.0
