Learning by changing connection weights alone is time-consuming and does not always succeed; the freedom to modify network structure is also needed. Grow-and-Learn (GAL) is a new algorithm that quantizes vectors as members of categories in an incremental fashion. When a new vector is encountered, it is classified as in nearest-neighbor search; if it is not already quantized correctly, a unit and its links are added to accommodate the additional requirement. The network thus grows during learning, if and when necessary. Because the structure of the network produced in this learning phase depends on the order in which the vectors are encountered, a second phase is added to eliminate old, no-longer-necessary associations. In this phase, the network is closed to the environment and the input patterns are generated by the network itself; the relevance of each unit is computed, and units that are not vital are removed. Simulation results on character recognition are promising. Physiological plausibility and how the idea may be extended to unsupervised learning are discussed.
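
To make the growth rule concrete, the following Python sketch illustrates the incremental grow-when-misclassified idea under stated assumptions. The class and method names are illustrative, not taken from the paper, and the pruning step here uses a simple leave-one-out redundancy check as a stand-in for the paper's second phase, in which the network generates its own patterns and computes unit relevance.

```python
# Minimal sketch of a GAL-style incremental nearest-neighbor quantizer.
# Names (GALNetwork, train, prune) and the pruning criterion are assumptions
# made for illustration; they are not the paper's exact formulation.
import numpy as np

class GALNetwork:
    def __init__(self):
        self.prototypes = []   # stored exemplar vectors (the "units")
        self.labels = []       # category label of each stored unit

    def classify(self, x):
        """Nearest-neighbor search over the stored units."""
        if not self.prototypes:
            return None
        d = [np.linalg.norm(x - p) for p in self.prototypes]
        return self.labels[int(np.argmin(d))]

    def train(self, x, y):
        """Learning phase: grow a new unit only when x is misclassified."""
        if self.classify(x) != y:
            self.prototypes.append(np.asarray(x, dtype=float))
            self.labels.append(y)

    def prune(self):
        """Second phase (simplified): remove units that are not vital, i.e.
        units still classified correctly by their nearest neighbor among
        the remaining units."""
        i = 0
        while i < len(self.prototypes):
            p, y = self.prototypes.pop(i), self.labels.pop(i)
            if self.classify(p) == y:      # redundant unit: leave it removed
                continue
            self.prototypes.insert(i, p)   # vital unit: put it back
            self.labels.insert(i, y)
            i += 1
```

As a usage note, repeatedly calling `train` on a stream of labeled vectors and then `prune` reproduces the two-phase behavior described above: the network grows only where the current nearest-neighbor decision is wrong, and the second phase trims units whose removal does not change the stored decisions.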