Two-way deconfounder for off-policy evaluation in causal reinforcement learning
Yu, S., Fang, S., Peng, R., Qi, Z., Zhou, F. & Shi, C.
(2024-12-10 - 2024-12-15)
Two-way deconfounder for off-policy evaluation in causal reinforcement learning [Paper]. 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024), Vancouver Convention Center, Vancouver, Canada.
This paper studies off-policy evaluation (OPE) in the presence of unmeasured confounders. Inspired by the two-way fixed effects regression model widely used in the panel data literature, we propose a two-way unmeasured confounding assumption to model the system dynamics in causal reinforcement learning. Based on this assumption, we develop a two-way deconfounder algorithm that uses a neural tensor network to simultaneously learn the unmeasured confounders and the system dynamics; a model-based estimator is then constructed from the learned model for consistent policy value estimation. We illustrate the effectiveness of the proposed estimator through theoretical results and numerical experiments.
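The abstract suggests a simple structure: two latent factor embeddings (one per trajectory, one per time step) combined through a tensor-style interaction and fed, together with the observed state and action, into a learned dynamics model. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class name, layer shapes, the `nn.Bilinear` interaction, and the squared-error training objective are assumptions for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the two-way deconfounder idea from the abstract.
# All names, shapes, and architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn


class TwoWayDeconfounder(nn.Module):
    """Jointly learns trajectory- and time-specific latent confounders and a
    dynamics model conditioned on them (two-way structure, as in panel data)."""

    def __init__(self, n_traj, horizon, state_dim, action_dim, latent_dim=8, hidden=64):
        super().__init__()
        # Two-way latent factors: one embedding per trajectory, one per time step.
        self.traj_embed = nn.Embedding(n_traj, latent_dim)
        self.time_embed = nn.Embedding(horizon, latent_dim)
        # Neural-tensor-style (bilinear) interaction between the two factors.
        self.interaction = nn.Bilinear(latent_dim, latent_dim, hidden)
        # Dynamics head: predicts next state and reward from (state, action, confounder).
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),
        )

    def forward(self, traj_idx, time_idx, state, action):
        # Inferred unmeasured confounder for (trajectory i, time t).
        u = torch.relu(self.interaction(self.traj_embed(traj_idx), self.time_embed(time_idx)))
        out = self.dynamics(torch.cat([state, action, u], dim=-1))
        return out[..., :-1], out[..., -1]  # predicted next state, predicted reward


def fit(model, batch, epochs=200, lr=1e-3):
    """Fit the model to observed transitions by squared-error regression.

    `batch` is a dict of tensors with keys:
    traj_idx, time_idx, state, action, next_state, reward.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        pred_s, pred_r = model(batch["traj_idx"], batch["time_idx"],
                               batch["state"], batch["action"])
        loss = ((pred_s - batch["next_state"]) ** 2).mean() \
             + ((pred_r - batch["reward"]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

Once fitted, the learned dynamics (with the estimated confounders plugged in) could be rolled out under the target policy to produce a model-based estimate of its value, which is the role of the model-based estimator mentioned in the abstract.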
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Copyright holders | © 2024 The Author(s) |
| Departments | LSE > Academic Departments > Statistics |
| Date Deposited | 21 Nov 2024 |
| Acceptance Date | 25 Sep 2024 |
| URI | https://researchonline.lse.ac.uk/id/eprint/126146 |
Full text: Accepted Version (restricted to Repository staff only until 1 January 2100; a copy may be requested).
ORCID: https://orcid.org/0000-0001-7773-2099