Outcome-based reinforcement learning to predict the future
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has been an effective approach for improving Large Language Models’ reasoning in domains such as coding and mathematics. Here, we apply RLVR methods to forecasting future real-world events, a challenging task for RL due to the very noisy (and delayed) outcomes involved. Using a novel dataset of recent questions from a prediction market, and accompanying relevant news headlines, we show that a compact (14B) reasoning model can be trained to match or surpass the predictive accuracy of frontier models like o1, while greatly improving probabilistic calibration. The model’s performance is also practically meaningful: in a Polymarket trading simulation, we estimate that its bets would have yielded a return on investment of over 10% across all questions in the test set. We detail and compare the approaches used in training our model, including augmenting our training data with synthetic prediction questions, guardrails for learning stability, and median prediction sampling at inference time.
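The abstract mentions two evaluation-time ideas: aggregating several sampled forecasts via their median, and measuring probabilistic calibration. A minimal sketch of both is below; the function names, the example probabilities, and the use of the Brier score as the calibration metric are illustrative assumptions, not details taken from the paper.

```python
import statistics

def median_prediction(sampled_probs):
    # Aggregate several independently sampled forecasts for one
    # question by taking their median, which damps outlier samples.
    return statistics.median(sampled_probs)

def brier_score(forecasts, outcomes):
    # Mean squared error between probabilistic forecasts and
    # binary (0/1) resolved outcomes; lower is better calibrated.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: five sampled probabilities for a single question.
samples = [0.55, 0.60, 0.62, 0.70, 0.90]
p = median_prediction(samples)
print(p)                      # median forecast for this question
print(brier_score([p], [1]))  # score if the event resolved "yes"
```

A mean of the same samples would be pulled up by the 0.90 outlier; the median is the more robust aggregate, which is presumably why it is used at inference time.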
| Item Type | Article |
|---|---|
| Copyright holders | © 2025 The Author(s) |
| Departments | LSE > Academic Departments > Management |
| Date Deposited | 19 February 2026 |
| Acceptance Date | 1 January 2021 |
| URI | https://researchonline.lse.ac.uk/id/eprint/137351 |