We propose and implement a Bayesian optimal design procedure. The procedure takes as its primitives a class of parametric models of strategic behavior, a class of games (experimental designs), and priors over the behavioral parameters. We select the experimental design that maximizes the information yielded by the experiment, then sample sequentially under that design until only one model retains viable posterior odds. A model with low posterior odds within a small collection of models will have even lower posterior odds within a larger class, and hence can be dismissed. The procedure can be applied sequentially by introducing new models and comparing them to the models that survived earlier rounds of experiments. The emphasis is not on running as many experiments as possible, but on choosing experimental designs that distinguish between models in the shortest possible time. We illustrate the procedure with a simple experimental game with one-sided incomplete information.
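The stopping rule described above can be sketched as a sequential posterior-odds update. In this minimal illustration (not the paper's implementation) the two candidate models are point hypotheses about a Bernoulli outcome; the probabilities, data, and stopping threshold are hypothetical choices made for the example:

```python
import math

def sequential_posterior_odds(observations, p_a, p_b, prior_odds=1.0, threshold=100.0):
    """Update the posterior odds of model A over model B one observation
    at a time; stop once the odds exceed `threshold` (A survives) or fall
    below 1/`threshold` (A is dismissed)."""
    log_odds = math.log(prior_odds)
    for n, x in enumerate(observations, start=1):
        # Likelihood ratio of a single Bernoulli draw under the two models.
        log_odds += math.log(p_a if x else 1.0 - p_a)
        log_odds -= math.log(p_b if x else 1.0 - p_b)
        odds = math.exp(log_odds)
        if odds >= threshold or odds <= 1.0 / threshold:
            return odds, n  # decisive: only one model has viable posterior odds
    return math.exp(log_odds), n  # sample budget exhausted without a decision

# Illustrative data: mostly successes, consistent with model A (p = 0.7)
# rather than model B (p = 0.3).
data = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
odds, n = sequential_posterior_odds(data, p_a=0.7, p_b=0.3)
```

Working in log odds keeps the update numerically stable, and the symmetric stopping region means the experiment ends as soon as either model can be dismissed, in the spirit of distinguishing models in as few trials as possible.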