The complexity of modern data analysis techniques and the growing volume of data produced by neuroscientific experiments place new demands on the computing infrastructure required for data processing. These demands exceed the speed and memory limits of a classical serial program design and require scientists to parallelize their analysis processes on distributed computer systems. In this chapter we explore, step by step, how to transform a typical data analysis program into a parallelized application. On the conceptual level, we demonstrate how to identify those parts of a serial program best suited for parallel execution. On the level of practical implementation, we introduce four methods that assist in managing and distributing the parallelized code. By combining high-level scientific programming languages with modern techniques for job control and metaprogramming, scientists can parallelize their analyses without knowledge of system-level parallelization or the underlying hardware architecture. We describe the solutions in a general fashion to facilitate the transfer of these insights to the specific software and computer system environment of a particular laboratory.
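As a minimal sketch of the transformation described above, consider a loop that applies the same analysis independently to each trial of a recording. All names here (`analyze_trial`, the spike-train data) are hypothetical illustrations, and `multiprocessing` stands in for whatever distribution mechanism a given laboratory uses; the point is only that a loop without cross-iteration dependencies can be handed to a pool of workers with little change to the surrounding code.

```python
import multiprocessing


def analyze_trial(trial):
    """Hypothetical per-trial analysis: a mean firing rate.

    In a real pipeline this would be an expensive computation that
    depends only on its own trial, not on other iterations.
    """
    spike_times, duration = trial
    return len(spike_times) / duration


def analyze_serial(trials):
    # Classical serial design: process one trial after another.
    return [analyze_trial(t) for t in trials]


def analyze_parallel(trials, n_workers=4):
    # The same loop, distributed: because iterations are independent,
    # pool.map can farm them out to worker processes unchanged.
    with multiprocessing.Pool(n_workers) as pool:
        return pool.map(analyze_trial, trials)


if __name__ == "__main__":
    trials = [([0.1, 0.5, 0.9], 1.0), ([0.2, 0.4], 1.0)]
    # Both variants must produce identical results.
    assert analyze_serial(trials) == analyze_parallel(trials)
```

Identifying such independent, compute-heavy loops is the conceptual step; choosing how the worker pool is managed on a distributed system is the implementation step addressed by the methods introduced in this chapter.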