## Single-pass Streaming Lower Bounds for Multi-armed Bandits Exploration with Instance-sensitive Sample Complexity

Authors:
Sepehr Assadi, Chen Wang.

Abstract:
Motivated by applications to processing massive datasets, we study streaming algorithms for pure exploration in stochastic multi-armed bandits (MABs). This problem was first formulated by Assadi and Wang [STOC 2020] as follows: a collection of n arms with unknown rewards arrive one by one in a stream, and the algorithm is only allowed to store a limited number of arms at any point. The goal is to find the arm with the largest reward while minimizing the number of arm pulls (sample complexity) and the maximum number of stored arms (space complexity). Assuming ∆_[2] is known, Assadi and Wang designed an algorithm that uses a memory of just one arm and still achieves a sample complexity of O(n/∆_[2]^2), which is worst-case optimal even for non-streaming algorithms; here ∆_[i] is the gap between the rewards of the best and the i-th best arms.
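To make the memory model concrete, here is a minimal simulation of a single-arm-memory streaming strategy: the algorithm stores one arm at a time and challenges it with each arriving arm using roughly 1/∆² pulls. This is a simplified illustration of the setting, not the algorithm of Assadi and Wang (which manages its sample budget far more carefully to reach O(n/∆_[2]^2) total pulls); the function name and budget constant are choices made for this sketch.

```python
import random

def stream_best_arm(means, delta, budget_per_arm=None, rng=None):
    """One-arm-memory streaming sketch: store a single arm and challenge
    it with each arriving arm using ~1/delta^2 pulls per arm.

    `means` are Bernoulli reward means arriving in stream order; `delta`
    plays the role of the known gap parameter. Illustrative only."""
    rng = rng or random.Random(0)
    pulls = budget_per_arm or max(1, int(4 / delta**2))
    stored_est, stored_idx = -1.0, -1   # the single arm kept in memory
    total_pulls = 0
    for idx, p in enumerate(means):     # arms arrive one by one
        # Pull the arriving arm `pulls` times and estimate its mean.
        est = sum(rng.random() < p for _ in range(pulls)) / pulls
        total_pulls += pulls
        if est > stored_est:            # keep only the empirically better arm
            stored_est, stored_idx = est, idx
    return stored_idx, total_pulls
```

Note that this naive challenger scheme already respects the one-arm memory constraint; the difficulty the paper studies is whether such limited-memory algorithms can also match the finer, instance-sensitive sample complexity bounds.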

In this paper, we extend this line of work to stochastic MABs in the streaming model with instance-sensitive sample complexity, in the spirit of Karnin et al. [ICML 2013] and Jamieson et al. [COLT 2014] in the classical setting. We prove strong negative results in this setting: any single-pass streaming algorithm must either use asymptotically higher sample complexity than the instance-sensitive bound or store Ω(n) arms, even if the parameter ∆_[2] is known. In fact, the lower bound holds under much stronger assumptions, including random-order streams and knowledge of all gap parameters {∆_[i]}_i>1. We complement our lower bounds with a new algorithm that uses a memory of a single arm and achieves the instance-optimal sample complexity when all of these strong assumptions hold simultaneously.
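For concreteness, the instance-sensitive bound referenced here has, up to constants and confidence terms, the following form (a paraphrase of the guarantees in Karnin et al. [ICML 2013] and Jamieson et al. [COLT 2014], using the gap notation of this abstract):

```latex
% Instance-sensitive sample complexity for best-arm identification,
% with constants and the dependence on the confidence parameter omitted.
O\!\left( \sum_{i=2}^{n} \frac{1}{\Delta_{[i]}^{2}}
          \log\log \frac{1}{\Delta_{[i]}} \right),
\qquad
\Delta_{[i]} := \mu_{[1]} - \mu_{[i]},
```

where μ_[i] denotes the i-th largest mean reward. This can be much smaller than the worst-case bound O(n/∆_[2]^2) when most gaps ∆_[i] are large.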

Our lower bounds are built on a novel arm-trapping lemma. This generic complexity result shows that any algorithm that traps the index of the best arm within a set of o(n) indices (without necessarily identifying it) must use Ω(n/∆_[2]^2) samples. This result is not restricted to the streaming setting; to the best of our knowledge, it is the first result that captures the sample-space trade-off for ‘trapping’ arms in multi-armed bandits, and it may be of independent interest.
