Deep jump learning for off-policy evaluation in continuous treatment settings

Cai, H., Shi, C., Song, R. & Lu, W. (2021). Deep jump learning for off-policy evaluation in continuous treatment settings. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021).

We consider off-policy evaluation (OPE) in continuous treatment settings, such as personalized dose-finding. In OPE, one aims to estimate the mean outcome under a new treatment decision rule using historical data generated by a different decision rule. Most existing work on OPE focuses on discrete treatment settings. To handle continuous treatments, we develop a novel estimation method for OPE using deep jump learning. The key ingredient of our method lies in adaptively discretizing the treatment space, leveraging deep learning and multi-scale change point detection. This allows us to apply existing OPE methods for discrete treatments to handle continuous treatments. Our method is further justified by theoretical results, simulations, and a real application to warfarin dosing.
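
For intuition, here is a minimal, self-contained Python sketch of the core idea described in the abstract, not the authors' implementation: segment a continuous dose range via penalized least-squares change point detection, fit a per-segment outcome model, and then evaluate a target dosing rule exactly as in discrete-treatment OPE. The variable names, the fixed dose grid, the penalty value, the linear per-segment model, and the `target_policy` rule are all illustrative assumptions; the paper instead fits deep neural networks within each segment and uses a multi-scale search.

```python
# Sketch of "discretize, then evaluate": NOT the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Simulated logged data: covariate X, dose A ~ Unif[0, 1), outcome Y
# whose dose effect jumps at A = 0.5.
n = 2000
X = rng.normal(size=(n, 1))
A = rng.uniform(size=n)
Y = np.where(A < 0.5, 1.0, -1.0) * X[:, 0] + rng.normal(scale=0.3, size=n)

def fit_segment(mask):
    """Least-squares fit of Y on (1, X) within one dose segment."""
    Z = np.column_stack([np.ones(mask.sum()), X[mask]])
    beta, *_ = np.linalg.lstsq(Z, Y[mask], rcond=None)
    rss = float(np.sum((Y[mask] - Z @ beta) ** 2))
    return beta, rss

# Adaptive discretization: minimize RSS + penalty * (#segments) over all
# partitions of the grid, via standard change point dynamic programming.
grid = np.linspace(0.0, 1.0, 21)     # candidate change points
penalty = 5.0                        # complexity penalty (tuning parameter)
m = len(grid) - 1
cost = np.full(m + 1, np.inf)
cost[0] = 0.0
back = np.zeros(m + 1, dtype=int)
cache = {}
for j in range(1, m + 1):
    for i in range(j):
        mask = (A >= grid[i]) & (A < grid[j])
        if mask.sum() < 10:          # require enough data per segment
            continue
        if (i, j) not in cache:
            cache[(i, j)] = fit_segment(mask)
        c = cost[i] + cache[(i, j)][1] + penalty
        if c < cost[j]:
            cost[j], back[j] = c, i

j, cuts = m, [m]                     # recover boundaries by backtracking
while j > 0:
    j = back[j]
    cuts.append(j)
cuts = cuts[::-1]
segments = [(grid[i], grid[j]) for i, j in zip(cuts[:-1], cuts[1:])]
print("estimated dose segments:", segments)

# OPE step: treat each segment as one discrete "arm" and plug the target
# rule's dose into the fitted model of the segment containing it.
def target_policy(x):                # hypothetical dosing rule to evaluate
    return 0.25 if x[0] > 0 else 0.75

def estimate_value():
    vals = []
    for xi in X:
        a = target_policy(xi)
        for (i, j), (lo, hi) in zip(zip(cuts[:-1], cuts[1:]), segments):
            if lo <= a < hi or (j == m and a >= lo):
                beta, _ = cache[(i, j)]
                vals.append(beta[0] + beta[1] * xi[0])
                break
    return float(np.mean(vals))

print("estimated policy value:", estimate_value())
```

Because the simulated dose-response jumps at 0.5, the detected segment boundaries should fall near 0.5, after which the value estimate reduces to a standard regression-based OPE estimator over the learned discrete arms.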
