Deeply-debiased off-policy interval estimation

Shi, C., Wan, R., Chernozhukov, V. & Song, R. (2021) Deeply-debiased off-policy interval estimation [Paper]. International Conference on Machine Learning (ICML 2021), Online, 18-24 July 2021.

Off-policy evaluation estimates a target policy’s value from a historical dataset generated by a different behavior policy. In addition to a point estimate, many applications would benefit significantly from having a confidence interval (CI) that quantifies the uncertainty of the point estimate. In this paper, we propose a novel deeply-debiasing procedure to construct an efficient, robust, and flexible CI on a target policy’s value. Our method is justified by theoretical results and numerical experiments. A Python implementation of the proposed procedure is available at https://github.com/RunzheStat/D2OPE.
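
To make the problem setup in the abstract concrete, the sketch below shows a plain importance-sampling point estimate of a target policy’s value from behavior-policy data, together with a normal-approximation confidence interval. This is only an illustration of off-policy interval estimation in general; it is not the paper’s deeply-debiased procedure, and none of the names here come from the D2OPE package. The toy data, function name, and argument names are all hypothetical.

# Illustrative sketch only: a basic importance-sampling (IS) off-policy estimate
# with a normal-approximation CI. This is NOT the deeply-debiased procedure
# from the paper, nor the D2OPE API; all names and data here are hypothetical.
import numpy as np


def ope_point_and_interval(rewards, behavior_probs, target_probs):
    """Estimate a target policy's value from logged behavior-policy data.

    rewards        : observed rewards under the behavior policy, shape (n,)
    behavior_probs : probability the behavior policy assigned to each logged action, shape (n,)
    target_probs   : probability the target policy assigns to the same actions, shape (n,)
    Returns the IS point estimate and a 95% normal-approximation CI.
    """
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(target_probs, dtype=float) / np.asarray(behavior_probs, dtype=float)
    values = weights * rewards               # per-sample importance-weighted rewards
    n = len(values)
    point = values.mean()                    # IS point estimate of the target policy's value
    se = values.std(ddof=1) / np.sqrt(n)     # standard error of the sample mean
    z = 1.96                                 # standard normal 97.5% quantile (95% CI)
    return point, (point - z * se, point + z * se)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    # Toy logged bandit data: the behavior policy picks action 1 with probability 0.3,
    # the target policy with probability 0.7; action 1 yields a higher expected reward.
    actions = rng.binomial(1, 0.3, size=n)
    rewards = rng.normal(loc=np.where(actions == 1, 1.0, 0.2), scale=1.0)
    behavior_probs = np.where(actions == 1, 0.3, 0.7)
    target_probs = np.where(actions == 1, 0.7, 0.3)
    est, ci = ope_point_and_interval(rewards, behavior_probs, target_probs)
    print(f"point estimate: {est:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

A plain IS estimator like this can have high variance and offers none of the efficiency or robustness guarantees the paper establishes for its deeply-debiased CI; it is shown only to make the point-estimate-plus-interval formulation of the problem concrete.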
