Pruning describes the idea of avoiding Overfitting by simplifying a learned concept, typically after the actual induction phase. The term originates from Decision Tree learning, where improving a decision tree by cutting off some of its branches is reminiscent of pruning in gardening.
One can distinguish between Pre-Pruning, where pruning decisions are made during the learning process, and Post-Pruning, where pruning occurs in a separate phase after the learning process. Pruning techniques are particularly important for state-of-the-art Decision Tree and Rule Learning algorithms.
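As an illustration of Post-Pruning, the following is a minimal sketch of reduced-error pruning, one common post-pruning strategy: after learning, each subtree is tentatively replaced by a leaf predicting its majority training class, and the replacement is kept only if accuracy on a held-out validation set does not drop. The `Node` class, the hand-built tree, and the toy data are illustrative assumptions, not part of the original text.

```python
class Node:
    """A decision tree node; leaves carry a label, internal nodes a split."""
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 label=None, majority=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label
        self.majority = majority  # majority training class seen at this node

def predict(node, x):
    if node.label is not None:
        return node.label
    child = node.left if x[node.feature] <= node.threshold else node.right
    return predict(child, x)

def accuracy(tree, data):
    return sum(predict(tree, x) == y for x, y in data) / len(data)

def prune(node, root, val_data):
    """Bottom-up reduced-error pruning: replace a subtree with its
    majority-class leaf whenever validation accuracy does not decrease."""
    if node.label is not None:
        return  # already a leaf
    prune(node.left, root, val_data)
    prune(node.right, root, val_data)
    before = accuracy(root, val_data)
    # tentatively collapse this internal node into a leaf
    saved = (node.left, node.right, node.label)
    node.left = node.right = None
    node.label = node.majority
    if accuracy(root, val_data) < before:
        node.left, node.right, node.label = saved  # revert: pruning hurt

# Tiny hand-built tree whose right subtree overfits a noisy split.
tree = Node(feature=0, threshold=0.5, majority=1,
            left=Node(label=0, majority=0),
            right=Node(feature=1, threshold=0.5, majority=1,
                       left=Node(label=1, majority=1),
                       right=Node(label=0, majority=0)))
val = [((0.0, 0.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]
prune(tree, tree, val)
# the noisy right subtree collapses to a single majority-class leaf
```

A Pre-Pruning method would instead stop splitting during induction (e.g., via a minimum-samples or significance threshold), so the overfitted subtree above would never be grown in the first place.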
The idea behind pruning is essentially the same as Regularization in statistical learning; the main difference is that regularization incorporates a complexity penalty directly into the learning heuristic, whereas pruning uses a separate pruning criterion or pruning algorithm.