Active learning parallelization is widely used, but typically relies on a batch size that is fixed throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed (larger batches are more costly, while smaller batches lead to slower wall-clock runtimes) and…