1 Introduction to the Domain of Research

Human beings have attempted to reduce their workload as much as possible since antiquity. Tools and machines have been invented to help humans reduce the physical effort required and dedicate their time to more important tasks, such as developing and controlling these instruments. To this end, one of the major milestones in the history of humanity was the rise of computation (Minsky 1967). The term Computer Science refers to the set of scientific and technical knowledge that makes automatic information processing possible using computers (Ellis and Nutt 1980).

Computer Science developed immensely in the second half of the 20th century with the rise of technologies such as integrated circuits and the Internet, which fostered the expansion of areas as important as computation, software development and communications (Zelkowitz 1978; Aljawarneh 2015; Aljawarneh et al. 2016, 2017a, b, 2018).

Nowadays, Computer Science offers human beings an immense range of possibilities for development thanks to the rise and consolidation of concepts like the Internet of Things, Cloud Computing, Big Data, cryptocurrency, etc. (Bani Yassein et al. 2017; Aljawarneh and Yassein 2016; Radhakrishna et al. 2016a, b, 2017a, b, c, d, 2018a, b, c). In order to adequately understand the current state of Computer Science, it is important to be aware of the historical contributions made to this field by such important scientists as Leibniz, Babbage, Boole, Gödel, Turing, von Neumann and Shannon (Anguera de Sojo et al. 2013; Ares et al. 2018).

Computer Science today stands at a crucial point in time: a number of emerging challenges will mark the development of the field in the near future and require profound reflection within the scientific community (Gucwa and Cheng 2017; Pourmajidi et al. 2017; Švábenský and Vykopal 2018).

The remainder of this introductory paper is organized as follows. Section 2 reviews the related work and summarizes the selected papers. Section 3 offers a set of recommendations to help researchers, practitioners and scholars improve the quality of their research in this area. Section 4 draws the conclusions.

2 Related Work: The Selected Papers

The goal of this special issue is precisely to analyse the current situation of Computer Science from an epistemological, two-way perspective: looking backward, as today’s Computer Science is the product of all the progress made throughout history; and looking forward, as Computer Science is destined to be very much present in the future development of humanity.

The original suggested list of topics included: the milestones in the history of Computer Science and its disciplines, the role of scientists in the development of Computer Science (Turing, Boole, Leibniz, Pascal, von Neumann, …), the social impact of Computer Science today and its applications in medicine, education, social relations, etc., prospective work, theories and paradigms that may be important in years to come, future challenges in the area, ethics in Computer Science, plagiarism and nostrification.

The special issue includes 10 papers, which have been subject to a rigorous peer-review process. Each paper has been reviewed by two independent experts. The rest of this section includes a summary of the selected papers.

In the paper “Hybrid Efficient Genetic Algorithm for Big Data Feature Selection Problems”, Mohammed et al. propose a new gene-weighted mechanism that can adaptively classify features into strong relative features, weak or redundant features, and unstable features during the evolution of the algorithm. Based on this classification, the proposed algorithm gives the strong features high priority and the weak features low priority when generating new candidate solutions. At the same time, it concentrates on the unstable features that sometimes appear in and sometimes disappear from the best solutions of the population. The performance of the proposed algorithm is investigated using different datasets and feature selection algorithms. The results show that it outperforms the other feature selection algorithms and effectively enhances classification performance on the tested datasets.
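The gene-weighting mechanism can be pictured with a short sketch. The code below is only an illustration of the idea as summarized above, not the authors’ implementation: it assumes binary chromosomes over the feature set, estimates each feature’s appearance rate in the elite solutions, and biases the generation of new candidates accordingly; all thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def classify_features(elite, hi=0.8, lo=0.2):
    """Label features by how often they appear in the elite chromosomes:
    'strong' (almost always selected), 'weak' (almost always dropped),
    or 'unstable' (keeps appearing and disappearing)."""
    freq = elite.mean(axis=0)                      # per-feature appearance rate
    labels = np.full(freq.shape, "unstable", dtype=object)
    labels[freq >= hi] = "strong"
    labels[freq <= lo] = "weak"
    return labels

def new_candidate(labels):
    """Generate a candidate solution: strong features get high inclusion
    priority, weak/redundant ones low priority, and unstable ones are
    probed at random so the search keeps concentrating on them."""
    p_include = np.where(labels == "strong", 0.9,
                np.where(labels == "weak", 0.1, 0.5))
    return (rng.random(labels.shape) < p_include).astype(int)

# Toy usage: 10 elite chromosomes over 8 candidate features.
elite = rng.integers(0, 2, size=(10, 8))
labels = classify_features(elite)
print(labels)
print(new_candidate(labels))
```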

The paper by Nagaraja et al., titled “UTTAMA—An Intrusion Detection System Based on Feature Clustering and Feature Transformation”, aims to devise a new membership function for similarity computation that can help address feature dimensionality issues. In principle, this work introduces a novel membership function intended to achieve better classification accuracies and eventually lead to better intrusion and anomaly detection. Experiments are performed on the KDD dataset with 41 attributes and on the KDD dataset with 19 attributes. The recent approaches CANN and CLAPP introduced new directions for intrusion detection; the proposed classifier, named UTTAMA, performs better than both CANN and CLAPP with respect to overall classifier accuracy. Another promising outcome achieved with UTTAMA concerns the U2R and R2L attack accuracies. The importance of the proposed approach lies in the fact that its accuracy outperforms CLAPP, CANN, SVM, KNN and other existing classifiers.
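The exact membership function introduced in the paper is not reproduced here; as a rough illustration of membership-based similarity classification, the sketch below scores a connection record against per-class prototypes using a generic product-form Gaussian membership and picks the class with the highest score. The prototypes, spread and feature values are all made up for the example.

```python
import numpy as np

def membership(x, center, sigma):
    """Product of per-feature Gaussian memberships: 1.0 at the class
    prototype, decaying toward 0 as the record moves away from it."""
    return float(np.prod(np.exp(-0.5 * ((x - center) / sigma) ** 2)))

def classify(x, prototypes, sigma):
    """Assign the record to the class whose prototype it matches best."""
    scores = {label: membership(x, mu, sigma)
              for label, mu in prototypes.items()}
    return max(scores, key=scores.get)

# Toy usage: two normalized traffic features (e.g. duration, error rate).
prototypes = {"normal": np.array([0.2, 0.1]),
              "attack": np.array([0.9, 0.8])}
record = np.array([0.25, 0.15])
print(classify(record, prototypes, sigma=0.3))   # -> 'normal'
```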

In the paper “Krishna Sudarsana - A Z-Space Interest Measure for Mining Similarity Profiled Temporal Association Patterns”, Radhakrishna et al. offer a novel z-space-based interest measure, named KRISHNA SUDARSANA, for time-stamped transaction databases, extending the interest measure SRIHASS proposed in previous research. KRISHNA SUDARSANA is designed using a product-based fuzzy Gaussian membership function and performs similarity computations in z-space to determine the degree of similarity between any two temporal patterns. The interest measure considers z-values between z = 0 and z = 3.09. Applying KRISHNA SUDARSANA requires moving the user-specified threshold to a different transformation space (z-space), which is defined as a function of the standard deviation. In addition to the interest measure, new expressions for the standard deviation and the equivalent z-space threshold are derived for the similarity computations. For the experimental evaluation, they considered the Naïve, Sequential and Spamine algorithms, which apply the Euclidean distance function, and compared the performance of these three approaches to the Z-SPAMINE algorithm, which uses KRISHNA SUDARSANA, over various test cases. The experimental results show that the proposed approach performs better than the Sequential approach, which uses a snapshot database scan strategy, and the Spamine approach, which uses a lattice-based database scan strategy.
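To give a feel for the z-space idea without reproducing the paper’s derived expressions, the sketch below uses an assumed mapping in which the user’s maximum allowed deviation corresponds to z = 3.09, and declares two temporal support sequences similar when every per-timestamp deviation stays within that z-bound. Both the mapping and the data are illustrative assumptions only.

```python
Z_MAX = 3.09   # range of z-values considered in the paper: 0 <= z <= 3.09

def sigma_from_threshold(delta, z_max=Z_MAX):
    """Assumed mapping: pick the Gaussian spread so that the user's
    maximum allowed deviation `delta` lands exactly at z = z_max."""
    return delta / z_max

def z_distance(supports_a, supports_b, sigma):
    """Largest per-timestamp deviation between two temporal support
    sequences, expressed in z-space (units of sigma)."""
    return max(abs(a - b) / sigma for a, b in zip(supports_a, supports_b))

def z_similar(supports_a, supports_b, delta):
    """Patterns are similar when their z-distance stays within Z_MAX,
    i.e. the user threshold has been moved into z-space."""
    sigma = sigma_from_threshold(delta)
    return z_distance(supports_a, supports_b, sigma) <= Z_MAX

# Toy usage: support values of two patterns across three time slots.
print(z_similar([0.30, 0.35, 0.32], [0.28, 0.36, 0.30], delta=0.05))  # True
```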

The paper titled “Securing NEMO Using a Bilinear Pairing-Based 3-Party Key Exchange (3PKE-NEMO) in Heterogeneous Networks”, by Reddicherla et al., proposes a secure architecture that provides authentication and confidentiality at each level of communication using a 3-party key exchange, called 3PKE-NEMO, built on bilinear pairings. Handoff delay is reduced without compromising security strength. The experimentation of the proposed work is carried out using the NS2 simulation tool, and an authentication proof between all the nodes in NEMO is given using BAN logic. The proposed security architecture is compared with related existing solutions and found to be more secure.
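The 3PKE-NEMO construction itself is not reproduced here, but the classic example of a 3-party key exchange from bilinear pairings is Joux’s one-round tripartite Diffie-Hellman, sketched below. For the sake of runnability the pairing is an intentionally insecure toy over integers (e(aP, bP) = g^(ab) mod p); real deployments use pairings on elliptic-curve groups, and all the concrete numbers here are illustrative.

```python
import secrets

# Toy symmetric bilinear map e: G x G -> G_T with G = (Z_q, +), P = 1,
# and e(X, Y) = g^(X*Y) mod p. It satisfies e(aP, bP) = g^(ab), which is
# all the protocol needs; it is insecure because discrete logs in G are
# trivial, so treat it purely as an illustration.
p = 2_147_483_647          # prime order of the target group G_T
q = p - 1                  # order of the source group G
g = 7                      # primitive root mod p, generator of G_T
P = 1                      # generator of G

def pairing(X, Y):
    return pow(g, (X * Y) % q, p)

# One round: each party picks a secret and broadcasts its public share.
a, b, c = (secrets.randbelow(q - 1) + 1 for _ in range(3))
A, B, C = (a * P) % q, (b * P) % q, (c * P) % q

# Each party combines the other two public shares with its own secret;
# by bilinearity all three obtain e(P, P)^(abc) = g^(abc).
k_alice = pow(pairing(B, C), a, p)
k_bob   = pow(pairing(A, C), b, p)
k_carol = pow(pairing(A, B), c, p)
assert k_alice == k_bob == k_carol
print("shared key:", k_alice)
```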

In the paper “A Similarity Function for Feature Pattern Clustering and High Dimensional Text Document Classification”, Kumar et al. propose a novel similarity function for feature pattern clustering and high-dimensional text classification. The proposed similarity function is used to perform supervised dimensionality reduction. An important feature of this work is that the word distribution before and after dimensionality reduction remains the same. Experimental results show that the proposed approach achieves dimensionality reduction, retains the word distribution and obtains better classification accuracies than other measures.

In the paper by Al-hayali et al., entitled “Increasing Energy Efficiency in Wireless Sensor Networks Using GA-ANFIS to Choose a Cluster Head and Assess Routing and Weighted Trusts to Demodulate Attacker Nodes”, a genetic algorithm (GA) and an adaptive neuro-fuzzy inference system (ANFIS) are used to diminish the energy waste of sensors. Weighted trust evaluation is applied to search for harmful nodes in the network and thereby prolong the lifespan of wireless sensor networks (WSNs). A low-energy adaptive clustering hierarchy (LEACH) method is used to analyze the results. The authors find that searching for harmful nodes with GA-ANFIS using weighted trust evaluation significantly increases the lifespan of WSNs. To evaluate the proposed method, they measure the mean energy of all sensors per round, the data packets received at the base station, the minimum energy per round and the number of alive sensors per round. They also compare the proposed method with the LEACH, LEACH-DT, Random, SIF and GA-Fuzzy methods; the proposed method achieves a longer network lifetime than all of them. The overall system is implemented in MATLAB.
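As a minimal sketch of the weighted-trust idea (not the authors’ exact scheme, and with made-up parameters): each node’s trust is blended with how well its reports agree with the cluster consensus, and nodes whose trust falls below a threshold are flagged as attackers and excluded from routing.

```python
import numpy as np

def update_trust(trust, reports, consensus, alpha=0.3, tol=0.1):
    """Weighted trust update: consistent reports pull trust toward 1,
    outlier reports pull it toward 0."""
    agreement = (np.abs(reports - consensus) <= tol).astype(float)
    return (1 - alpha) * trust + alpha * agreement

def flag_attackers(trust, threshold=0.4):
    """Nodes whose trust has decayed below the threshold are excluded."""
    return np.flatnonzero(trust < threshold)

# Toy usage: five sensors, node 3 keeps reporting implausible values.
trust = np.full(5, 0.8)
for _ in range(10):
    reports = np.array([1.00, 1.02, 0.98, 3.50, 1.01])
    trust = update_trust(trust, reports, consensus=np.median(reports))
print(trust.round(2), "-> attackers:", flag_attackers(trust))
```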

The paper by Aljawarneh et al., titled “Ultimate-Unearthing Latent Time Profiled Temporal Associations”, introduces ULTIMATE, a pioneering approach that uses a novel tree structure generated with the similarity measure ASTRA and applies support and distance-bound computations to prune temporal patterns. Experimental results show that ULTIMATE outperforms the SEQUENTIAL, SPAMINE, G-SPAMINE, MASTER, VRKSHA and GANDIVA algorithms.

In their paper “Hybrid Real-Time Protection System for Online Social Networks”, Bani Yassein et al. propose a comprehensive, user-level, proactive and real-time protection system for online social networks (OSNs), called Hybrid Real-time Social Networks Protector (HRSP). HRSP has three components: a user-level security protocol and two classification models. The protocol defines a structure for the OSN’s cryptographic services, including encryption, access control and user authentication. The classification models employ machine learning, black lists, white lists and user feedback to classify URLs into Benign, Risk and Inappropriate classes, and contents into Benign, Hate Speech and Inappropriate classes. The authors constructed two datasets of 150,000 URLs and 22,000 tweets to build and test the two classification models. Results show an overall accuracy of 93.2% for the URL model and 84.4% for the content model, while the protocol implementation produces acceptable size and time overhead. The components of HRSP are integrated and designed to be compatible with OSN platforms.
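A hybrid list-plus-model URL classifier of this kind can be sketched as follows. This is not HRSP’s actual pipeline: the lists, training URLs and model choice are placeholder assumptions, whereas HRSP’s real URL model was built from 150,000 labelled URLs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

WHITELIST = {"https://example.org/home"}          # known-benign URLs
BLACKLIST = {"http://malware.example.com/x"}      # known-risk URLs

# Placeholder training data standing in for a large labelled corpus.
urls = ["http://free-prizes.example.com/win", "https://example.org/news",
        "http://adult.example.net/page", "https://example.org/docs"]
labels = ["Risk", "Benign", "Inappropriate", "Benign"]

# Character n-grams are a common lightweight feature choice for URLs.
model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
                      LogisticRegression(max_iter=1000))
model.fit(urls, labels)

def classify_url(url):
    """Lists take precedence; the learned model handles unseen URLs."""
    if url in WHITELIST:
        return "Benign"
    if url in BLACKLIST:
        return "Risk"
    return model.predict([url])[0]

print(classify_url("http://free-stuff.example.com/win"))
```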

The paper “Turing: The Great Unknown”, by Anguera et al., reviews the important advances made by Alan Turing through his famous machines, which gave rise to evolutionary computation and genetic programming as well as to connectionism and learning. The authors show that Turing was an exceptional mathematician with a peculiar and fascinating personality, and yet he remains largely unknown. In fact, according to the authors, he might be considered the father of the von Neumann architecture computer and the pioneer of Artificial Intelligence.

In “A new approach to computing using informons and holons: towards a theory of computing science”, De la Peña et al. formally establish the fundamental elements and postulates that make up a first attempt at a theory of Computer Science. The fundamental elements of this theory are informons and holons, and it is formulated in terms of three postulates. The evaluation of the theory indicates that, apart from exhibiting the characteristics demanded of any scientific theory, it is useful instrumentally, as well as being a general and comprehensive theory of computation.

3 Discussions and Recommendations

Based on the selected papers, a number of recommendations are suggested to improve research in this field:

  • Structured computing needs to be discussed further, in light of the recent trends presented in manuscripts 1–4 and 9–10.

  • Software Science is still evolving, since it depends on emerging areas such as software engineering, data science and big data.

  • Social network systems need to ensure their integrity, security and scalability for users, as discussed in manuscript 8.

  • Data mining and deep learning approaches have proved their importance for intrusion detection systems (IDS), security tools and information retrieval systems, as shown in manuscripts 3–7.

4 Conclusions

In this special issue, 10 selected papers have been included that present important advancements in the area of Foundations of Software Science and Computation Structures. The selected papers present interesting studies about the development of Computer Science, works about promising existing technologies and outstanding research about theories and methods that will play an important role in the future of this discipline.

As guest editors, we are aware of the fact that this issue cannot completely cover all the advancements in this area, but we expect that this special issue can stimulate further research in the domain of Foundations of Software and Computer Science.