Special issue on ‘Neural networks for enhanced intelligence’
Artificial neural network (ANN) models comprise networks of interconnected adaptive processing units. A very important feature of these networks is their adaptive nature, which enables the replacement of traditional ‘programming’ with the ability of ‘learning by examples’. This feature makes these models very attractive in applications where there is very little, or incomplete, understanding of the problem, but where sufficient data is available for network training. ANNs have been found to be useful in a wide variety of problems, including pattern classification, speech recognition, function approximation, image and data compression, clustering, forecasting, and prediction.
‘Intelligence’ has many definitions; one popular definition is the ability to comprehend, to understand, and to profit from experience. ANNs have been categorized as intelligent systems, providing new technology for enhancing the intelligence of traditional applications. This special issue comprises eight papers presenting interesting research on the use of artificial neural networks and related techniques to enhance the intelligence of different applications.
The first paper describes a new model for time series forecasting using radial basis functions (RBFs). Initially, the new model is tested using Spanish bank and company stocks, to ascertain its ability to handle exogenous data with minimal preprocessing. A hybrid of the model with a genetic algorithm is then described, and the possibilities of parallel implementation are highlighted.
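As a generic illustration of RBF-based forecasting (a minimal sketch, not the paper's specific model: the lag length, Gaussian basis, random centre selection, and ridge-regularized least-squares fit are all assumptions for the example), a one-step-ahead RBF forecaster can be trained as follows:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations for each (sample, centre) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf(X, y, centers, width, ridge=1e-6):
    """Fit the linear output weights by ridge-regularized least squares."""
    Phi = rbf_design(X, centers, width)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Toy example: one-step-ahead forecast of a noisy sine wave
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(200)

lag = 5  # embed the series as lag vectors -> next value
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

centers = X[rng.choice(len(X), 20, replace=False)]  # random centre selection
w = fit_rbf(X, y, centers, width=1.0)
y_hat = predict(X, centers, width=1.0, w=w)
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

In practice the centres and widths would be chosen more carefully (e.g. by clustering, or, as in the hybrid described in the paper, by a genetic algorithm).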
The much-researched field of weather forecasting is discussed in the second paper, focusing on the applicability of Hopfield networks. The paper presents experimental results contrasting the Hopfield model with multi-layer perceptron and radial basis function models. The best results are reported to come from ensembles built from a number of the models' predicted output values.
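For readers unfamiliar with the Hopfield model, a minimal associative-memory sketch (Hebbian storage with asynchronous recall; this illustrates the basic network, not the paper's forecasting setup) is:

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for bipolar patterns (+1/-1), zero diagonal."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=20, seed=0):
    """Asynchronous threshold updates until a fixed point (or step limit)."""
    rng = np.random.default_rng(seed)
    s = np.array(state, dtype=float)
    for _ in range(steps):
        prev = s.copy()
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
        if np.array_equal(s, prev):
            break
    return s

# Store two orthogonal 16-bit patterns, then recover one from a corrupted cue
p1 = np.array([1] * 8 + [-1] * 8, dtype=float)
p2 = np.array([1, -1] * 8, dtype=float)
W = hopfield_train([p1, p2])
cue = p1.copy()
cue[0] = -cue[0]  # corrupt two bits
cue[3] = -cue[3]
recovered = hopfield_recall(W, cue)
```

The network settles into the nearest stored pattern, which is what makes it an attractor-based (rather than feed-forward) model.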
Protein interaction has been analysed and studied using neural networks in recent years. The third paper presents a support vector machine (SVM) approach to protein interaction analysis, which provides an interesting comparison with neural-network-based techniques.
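As background on SVMs in general (a minimal sketch, not the paper's method: the Pegasos-style sub-gradient solver, the toy two-cluster data, and all parameter values are assumptions for the example), a linear SVM minimizes the regularized hinge loss:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos-style sub-gradient descent on the hinge loss; labels in {-1,+1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            if y[i] * (X[i] @ w + b) < 1:  # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:  # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w, b

# Toy example: two well-separated Gaussian clusters
rng = np.random.default_rng(1)
pos = rng.normal([2, 2], 0.5, size=(40, 2))
neg = rng.normal([-2, -2], 0.5, size=(40, 2))
X = np.vstack([pos, neg])
y = np.array([1.0] * 40 + [-1.0] * 40)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

Kernelized SVMs replace the inner products with a kernel function, which is what typically makes them competitive with neural networks on tasks such as protein interaction analysis.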
The fourth paper also takes a support-vector-machine-based approach, this time for an image retrieval application. The paper proposes a method for maintaining and incorporating prior knowledge to improve the retrieval performance of future queries; the knowledge incorporation is managed using fuzzy classification techniques.
The fifth paper describes a multi-lingual web content filtering application. This has emerged as a very timely and relevant application, owing to the wide popularity of the web and the multiple languages used in web content. The paper proposes a technique based on fuzzy clustering and self-organizing maps to organize and filter multi-lingual web content for more effective information access.
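Since self-organizing maps recur in several of these papers, a minimal generic SOM training loop may be useful (an illustrative sketch only, assuming a small rectangular grid, a Gaussian neighbourhood, and linearly decaying learning rate and radius; none of these choices come from the papers):

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a rectangular SOM: find the best-matching unit (BMU) for each
    input, then pull the BMU and its grid neighbours toward the input."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)               # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            theta = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * theta[:, None] * (x - weights)
            step += 1
    return weights.reshape(h, w, -1)

# Toy example: two clusters should map to different regions of the grid
rng = np.random.default_rng(1)
cluster_a = rng.normal([0.1, 0.1], 0.05, size=(50, 2))
cluster_b = rng.normal([0.9, 0.9], 0.05, size=(50, 2))
data = np.vstack([cluster_a, cluster_b])
som = train_som(data, grid=(4, 4), epochs=10)
flat = som.reshape(-1, 2)
bmu_a = np.argmin(((flat - [0.1, 0.1]) ** 2).sum(axis=1))
bmu_b = np.argmin(((flat - [0.9, 0.9]) ** 2).sum(axis=1))
```

The topology-preserving grid is what makes SOMs attractive for organizing document or usage-pattern collections, as in the fifth and sixth papers.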
Many researchers are currently working on techniques to keep track of web usage patterns, understand trends, and provide suggestions to help users improve and speed up their web access. The sixth paper proposes a mechanism that uses agent-based technologies and dynamic self-organizing maps (SOMs) to combine usage patterns from multiple web server logs with the structures of multiple web sites, providing a much more efficient and informative set of suggested links.
The seventh paper presents a method for learning word semantics and comprehension via short-term memory (STM). The research attempts to analyse and understand language disorders, and offers suggestions for possible improvements.
The final paper is based on a dynamic and structure-adapting enhancement of the self-organizing map (SOM) called the growing self-organizing map (GSOM). The GSOM has been used in several data mining, and related, applications, and this paper analyses the advantage of ‘growing’ compared with the traditional ‘fixed’ structure of the SOM.
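The growth mechanism commonly associated with the GSOM can be sketched briefly: each node accumulates quantization error, and when a node's error exceeds a growth threshold derived from a user-chosen spread factor, new nodes are spawned at its free grid neighbours. The snippet below shows only the threshold and the growth decision, assuming the widely used formulation GT = -D × ln(SF), where D is the data dimensionality and SF the spread factor:

```python
import math

def growth_threshold(dim, spread_factor):
    """GT = -D * ln(SF): a smaller spread factor raises the threshold,
    yielding a more compact map; a larger one lets the map grow further."""
    return -dim * math.log(spread_factor)

def should_grow(accumulated_error, dim, spread_factor):
    """Grow new nodes when a node's accumulated quantization error
    exceeds the growth threshold."""
    return accumulated_error > growth_threshold(dim, spread_factor)
```

This is the key difference from the fixed-structure SOM: the map's size is controlled by a single data-independent parameter rather than chosen in advance.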
We would like to thank the authors for their contributions. We are grateful to the reviewers, including Drs. Leon Wang (New York Institute of Technology, USA), S. Chaudhri (University of Calcutta, India), P.K. Mahanti (University of New Brunswick, Canada), A. Kelemen (University of Memphis, USA), R. Chau (Monash University, Australia), H. Yeh (Monash University, Australia), and Ravi Jain (The University of South Australia, Australia), for their time and expertise.
The editors are grateful to Professor John Macintyre (Editor-in-Chief of the Neural Computing and Applications Journal, Springer-Verlag, UK) for his vision and comments related to this special issue.