Nonparametric bandits with covariates
Abstract
We consider a bandit problem that involves sequential sampling from two populations (arms). Each arm produces a noisy reward realization which depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance achieves the order of these lower bounds up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. Proofs blend ideas from nonparametric statistics and traditional methods used in the bandit literature.
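To make the "localization" idea concrete, below is a minimal Python sketch of one natural instantiation: partition the covariate space [0, 1] into bins and run a standard UCB rule independently within each bin, so each bin is its own small bandit problem. The reward model (arm means x and 1 - x), the bin count, the noise level, and the UCB constant are illustrative assumptions for the simulation, not the paper's tuned choices.

```python
import math
import random

def run_localized_ucb(horizon=20_000, n_bins=8, seed=0):
    """Two-armed bandit with a covariate X ~ Uniform[0, 1].

    Hypothetical reward means: arm 0 pays x on average, arm 1 pays
    1 - x, so the better arm switches at x = 1/2. The policy splits
    [0, 1] into n_bins bins and runs UCB separately in each bin.
    """
    rng = random.Random(seed)
    counts = [[0, 0] for _ in range(n_bins)]   # pulls per (bin, arm)
    sums = [[0.0, 0.0] for _ in range(n_bins)] # reward sums per (bin, arm)
    regret = 0.0

    for _ in range(horizon):
        x = rng.random()
        b = min(int(x * n_bins), n_bins - 1)   # bin containing the covariate

        # UCB choice restricted to the statistics of bin b.
        t = counts[b][0] + counts[b][1]
        arm = None
        for a in (0, 1):
            if counts[b][a] == 0:
                arm = a  # sample each arm once per bin before UCB applies
                break
        if arm is None:
            ucb = [
                sums[b][a] / counts[b][a]
                + math.sqrt(2 * math.log(t) / counts[b][a])
                for a in (0, 1)
            ]
            arm = 0 if ucb[0] >= ucb[1] else 1

        means = (x, 1.0 - x)                    # assumed mean-reward functions
        reward = means[arm] + rng.gauss(0.0, 0.1)  # noisy realization
        counts[b][arm] += 1
        sums[b][arm] += reward
        regret += max(means) - means[arm]

    return regret

if __name__ == "__main__":
    print(f"cumulative regret: {run_localized_ucb():.1f}")
```

With finer bins the policy tracks the covariate-dependent optimal arm more closely but leaves each local bandit with fewer samples; trading off these two effects (bin width against per-bin exploration) is the kind of balance the paper's lower and upper bounds quantify.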
Citation
Zeevi, Assaf, and Philippe Rigollet. "Nonparametric bandits with covariates." In Proceedings of the 23rd Conference on Learning Theory (COLT), ed. A. T. Kalai and M. Mohri, 54–66. New York: Association for Computing Machinery, July 2010.