Learning neural network structures with ant colony algorithms

Published in: Swarm Intelligence

Abstract

Ant colony optimization (ACO) has been successfully applied to classification, where the aim is to build a model that captures the relationships between the input attributes and the target class in a given domain’s dataset. The constructed classification model can then be used to predict the unknown class of a new pattern. While artificial neural networks are among the most widely used models for pattern classification, their application is commonly restricted to fully connected three-layer topologies. In this paper, we present a new algorithm, ANN-Miner, which uses ACO to learn the structure of feed-forward neural networks. We report computational results on 40 benchmark datasets for several variations of the algorithm. Performance is compared to the standard three-layer structure trained with two different weight-learning algorithms (backpropagation and the \(\hbox {ACO}_{\mathbb {R}}\) algorithm), and to a greedy algorithm for learning NN structures. A nonparametric Friedman test is used to determine statistical significance. In addition, we compare our proposed algorithm with NEAT, a prominent evolutionary algorithm for evolving neural networks, as well as with three well-known state-of-the-art classifiers: the C4.5 decision tree induction algorithm, the Ripper classification rule induction algorithm, and support vector machines.
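The general idea the abstract describes can be illustrated with a toy sketch. The code below is not the authors' ANN-Miner algorithm; it is a hypothetical, heavily simplified illustration of ACO-style structure search: each ant samples a candidate network structure (a subset of candidate connections) guided by pheromone levels, each candidate is scored by a user-supplied evaluation function (e.g. training weights and measuring validation accuracy), and the best structure found so far reinforces the pheromone for future ants. All names and parameters here are invented for illustration.

```python
import random

def ant_construct(pheromone):
    # One ant builds a structure: each candidate connection is included
    # with probability equal to its current pheromone level (a deliberate
    # simplification of a real ACO construction step).
    return {c: random.random() < p for c, p in pheromone.items()}

def aco_structure_search(connections, evaluate, n_ants=10, n_iter=50,
                         evaporation=0.1):
    """Toy ACO loop: ants sample structures, the best-so-far structure
    reinforces the pheromone on the connections it uses."""
    pheromone = {c: 0.5 for c in connections}
    best, best_quality = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            structure = ant_construct(pheromone)
            quality = evaluate(structure)  # e.g. validation accuracy
            if quality > best_quality:
                best, best_quality = structure, quality
        # Evaporate all trails, then deposit pheromone on the
        # connections chosen by the best structure found so far.
        for c in pheromone:
            pheromone[c] *= (1.0 - evaporation)
            if best[c]:
                pheromone[c] = min(1.0,
                                   pheromone[c] + evaporation * best_quality)
    return best, best_quality
```

In a sketch like this, `evaluate` would hide the expensive part — training the weights of each sampled topology before scoring it — which is where weight-learning algorithms such as backpropagation or \(\hbox {ACO}_{\mathbb {R}}\) come in.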



Acknowledgments

The partial support of a grant from the Brandon University Research Council (BURC) is gratefully acknowledged. The authors would like to thank the anonymous reviewers and the associate editor for their insightful comments which have substantially improved the paper.

Author information

Corresponding author

Correspondence to Ashraf M. Abdelbar.

About this article

Cite this article

Salama, K.M., Abdelbar, A.M. Learning neural network structures with ant colony algorithms. Swarm Intell 9, 229–265 (2015). https://doi.org/10.1007/s11721-015-0112-z
