Stochastic online learning algorithms typically converge slowly, but solutions of moderate accuracy often suffice in practice. Since the outcomes of these algorithms are random variables, not only their accuracy but also the probability of achieving a given accuracy, called the confidence, matters. We show that a rather simple aggregation of the outcomes of parallel dual averaging runs yields a solution with improved confidence, and that the confidence can be controlled by the number of runs, independently of the length of the learning processes.
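To make the scheme concrete, here is a minimal sketch of parallel dual averaging with simple aggregation. It assumes the unconstrained Euclidean form of dual averaging (iterate set to the scaled negative sum of past stochastic subgradients) and interprets "simple aggregation" as coordinate-wise averaging of the runs' outcomes; the step parameter `gamma`, the number of runs `K`, and the stochastic least-squares test problem are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def dual_averaging_run(sample_grad, dim, T, gamma, rng):
    """One run of simple (Euclidean) dual averaging on an unconstrained
    stochastic convex problem: w_{t+1} = -G_t / (gamma * sqrt(t)), where
    G_t is the running sum of stochastic subgradients.  The averaged
    iterate is returned, since standard guarantees cover that point."""
    w = np.zeros(dim)
    grad_sum = np.zeros(dim)
    w_avg = np.zeros(dim)
    for t in range(1, T + 1):
        grad_sum += sample_grad(w, rng)
        w = -grad_sum / (gamma * np.sqrt(t))
        w_avg += (w - w_avg) / t  # running average of the iterates
    return w_avg

def aggregate(outcomes):
    """Simple aggregation: coordinate-wise average of the K run outcomes."""
    return np.mean(outcomes, axis=0)

# Hypothetical test problem: stochastic least squares,
# minimize E[(x.w - y)^2] / 2 with y = x.w* + noise.
dim, T, K = 5, 2000, 16
w_star = np.ones(dim)

def sample_grad(w, rng):
    x = rng.standard_normal(dim)
    y = x @ w_star + 0.5 * rng.standard_normal()
    return x * (x @ w - y)  # gradient of the squared loss on one sample

# K independent runs (different seeds), each of length T.
runs = [dual_averaging_run(sample_grad, dim, T, gamma=1.0,
                           rng=np.random.default_rng(seed))
        for seed in range(K)]
w_hat = aggregate(runs)
print("error of a single run :", np.linalg.norm(runs[0] - w_star))
print("error of aggregation  :", np.linalg.norm(w_hat - w_star))
```

In this reading, each run's accuracy is fixed by the run length T, while increasing K tightens the concentration of the aggregated solution around a good point, which matches the abstract's claim that confidence is controlled by the number of runs rather than by the length of each learning process.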
Keywords
- Empirical Risk
- Subgradient Method
- Dual Averaging
- Simple Aggregation
- Machine Learning Research