"Moore's Law" [1] observes that available processing power doubles every 2 years or so. What does this mean for systems biology problems? The "ultimate laptop", a machine of approximate size and weight to a modern laptop performing at physically possible limits [2], would run at 1051 FLOPS (c.f. 1010 FLOPS for a current processor), based on the maximum information carrying capacity of 1 kg of matter, known as "Bremermann's Limit" [3]. 1040 FLOPS will be the effective ultimate limit, for various technical reasons. Applying this theory to a real biological example, 193 genes are known to be involved in spore formation and degradation in Bacillus[4]. If this network is represented as 193 binary nodes, in the simplest possible star topology (i.e. all genes are either on or off and all have an equal weight in determining the state of the pathway and there are no interactions between genes), exhaustive exploration of such a state space requires 2193 or 1058 operations. A current processor requires about 1048 seconds for this, or about 4 × 1040 years. The ultimate laptop manages in just under 4 × 1010 years, or slightly under 3 times the age of the universe. Parallelisation (the ultimate grid of ultimate laptops) is one solution, but grids of over 1012 nodes are also technically infeasible, for various reasons.

One alternative is simply to study only tractable problems. On a single 10¹⁰ FLOPS processor running for one month, one could exhaustively explore a state space of 2.7 × 10¹⁶ possibilities. A binary star-topology network of 54 genes is thus at the limit of brute-force tractability. Even with the ultimate laptop, the corresponding figure is only 154 genes. It is a sobering thought that the transition from present-day computing capacity to ultimate computing capacity increases the tractable size of a binary star network a mere 3-fold.
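The same budget can be turned around to ask how large a network fits inside a given amount of compute. A minimal sketch, assuming a month of roughly 2.6 × 10⁶ seconds:

```python
import math

SECONDS_PER_MONTH = 2.6e6  # roughly 30 days

def max_tractable_genes(flops, seconds=SECONDS_PER_MONTH):
    """Largest n for which all 2**n states of a binary star network
    can be enumerated within the given compute budget."""
    return math.floor(math.log2(flops * seconds))

print(max_tractable_genes(1e10))  # 54 genes on a current processor
print(max_tractable_genes(1e40))  # 154 genes on the ultimate laptop
```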

Shortcuts therefore need to be found. Computer scientists have a long history of developing heuristics for combinatorially explosive problems such as chess or natural-language parsing. What these heuristics have in common is that they use rules to prune the number of possible states. For instance, Hofmann et al. [5] attempt to define components of networks that behave as modules. The fact that real biological networks do not behave as independent star topologies may be the key to reducing their complexity. In a star topology, every node has an independent effect on the outcome, but where cycles exist in the network topology, it may be possible to treat whole segments of the network as fuzzy individuals. Certain parts of networks could then conveniently be "black boxed", losing some information in the process but critically pruning the tree of possibilities. Another source of ideas may be found in the field of quantitative genetics. Animal and plant breeders have long wrestled with systems in which quantitative traits such as fertility, body mass or milk production are controlled by large numbers of small-effect genes (e.g. [6]). Although 193 genes have some effect on sporulation in Bacillus, the subset that accounts for most of the variance may be rather small. One possible methodology, combining genomics with modelling at the functional level, is given by Tardieu [7].
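To make the pay-off of such black-boxing concrete, the sketch below compares the full state space with the state space that remains once each module is abstracted to a single on/off node. The partition of the 193 sporulation genes into 20 modules is purely hypothetical, chosen only for illustration:

```python
def brute_force_states(n_genes):
    """Full state space of n independent binary genes."""
    return 2 ** n_genes

def black_boxed_states(module_sizes, states_per_module=2):
    """State space after abstracting each module to a few coarse states,
    regardless of how many genes it contains."""
    return states_per_module ** len(module_sizes)

# Hypothetical partition: 19 modules of 10 genes plus one of 3 (193 genes in total).
modules = [10] * 19 + [3]

print(f"{brute_force_states(193):.1e}")   # ~1.3e58 states
print(black_boxed_states(modules))        # 2**20 = 1,048,576 states
```

The internal detail of each module is lost, but in this illustrative case the search space shrinks by roughly fifty orders of magnitude.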

The best way to ensure that we can deal with the large and complex is to concentrate on the small scale. Through exhaustive understanding of small networks, we can learn to abstract them as modules, feeding them as approximations into larger networks. Alongside the computer-science literature on complexity reduction, systems biologists should also explore the work of quantitative geneticists, who have several decades of experience in reliably estimating phenotypic output from complex and poorly understood genetic input. Moore is Less, but Less is More.