We have proposed a distributed platform for machine learning without data accumulation. The platform constructs feature models, called fog models, from distributed data and combines them to achieve the same level of performance as conventional methods based on data accumulation. In this paper, we propose and compare three methods for selecting which fog models to combine: sequential model selection, a conventional method that selects fog models in order of node ID; adaptive selection, a method that selects only those models that improve task performance; and similar model selection, a method that selects models similar to the current model. Each method prioritizes different criteria when selecting models. We evaluate the proposed methods by simulation against the method of our previous research. Compared with the previous method, the proposed methods prioritize performance improvement on the processing tasks while requiring a smaller number of combinations. They thus enable more efficient combination of fog models with fewer models, and users can adaptively select fog models either to improve overall performance or to prioritize the performance of specific features by selecting the target models.
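The three selection policies can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the model representation, the `evaluate` scoring function, the `similarity` measure, and the threshold value are all hypothetical placeholders.

```python
def sequential_selection(models):
    """Sequential model selection: combine fog models in node-ID order."""
    return [m for _, m in sorted(models.items())]


def adaptive_selection(models, evaluate, base_score):
    """Adaptive selection: keep a model only if adding it improves
    the task-performance score returned by `evaluate` (hypothetical)."""
    selected, score = [], base_score
    for _, m in sorted(models.items()):
        new_score = evaluate(selected + [m])
        if new_score > score:
            selected.append(m)
            score = new_score
    return selected


def similar_selection(models, current, similarity, threshold=0.5):
    """Similar model selection: keep models whose similarity to the
    current model (by a hypothetical `similarity` measure) is high enough."""
    return [m for m in models.values() if similarity(current, m) >= threshold]
```

For example, `adaptive_selection` greedily accepts a candidate fog model only when the combined score rises, so the number of combined models stays small when most candidates do not help the task.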