After the great success of cloud computing in industry, edge computing is now extending various cloud services from central server farms to venues close to applications. The cloud/edge computing infrastructure provides powerful, flexible, and pervasive computation and storage capabilities, which have been fostering many big data applications. The advances in cloud and edge computing technologies, which have been changing the way software is developed and deployed, have also been influencing the design and implementation of artificial intelligence algorithms and models. A typical example is the recent emergence of federated machine learning, which takes the computing infrastructure into consideration when designing algorithms, for privacy-preservation reasons.
Indeed, the increasing computing and storage power is one of the key enablers of deep learning models, which have drawn the attention of most, if not all, researchers in the related areas. Meanwhile, these advances have been widely adopted in cloud/edge computing platforms and applications, given the high complexity of their development, management, and deployment. For example, deep learning techniques are used to monitor the health of cloud/edge platforms, which can consist of a huge number of computing servers, routers, sensing devices, and actuators. These two lines of technology have thus reached a stage of convergence, in which both the computing infrastructure and the computation models are taken into consideration to achieve complete solutions to big data challenges. This has led to a plethora of successful big data applications in domains such as smart cities, the Internet of things, e-commerce, and driverless cars. As such, it is high time to investigate the challenges and opportunities brought by the convergence of AI and cloud/edge computing technologies, as well as the benefits and consequences for big data applications. Examples include how such convergence raises security and privacy concerns in machine/deep learning models, and how to design a cloud/edge computing platform deployment better suited to a specific AI algorithm or model.
This special issue aims to bring academic researchers and industry practitioners together to share and discuss the challenges, recent advances, and future trends of the convergence of AI and cloud/edge computing for big data applications. Including some submissions recommended from EAI CloudComp 2019 conference papers, this special issue attracted 14 submissions in total. After revisions based on a rigorous review process and the recommendations of expert reviewers in the relevant areas, we selected only five high-quality submissions for inclusion, giving an acceptance rate of around 35%. The following are summaries of the five selected articles.
The Internet of vehicles (IoV) is gradually being combined with connected autonomous vehicles (CAVs), which accelerates the development of CAVs. The first article, “Tasks Offloading for Connected Autonomous Vehicles in Edge Computing”, authored by Qi Wu et al., proposes a vehicle task offloading method (VTO) in which the vehicle task offloading problem is formulated as a multi-objective optimization problem: the load balance of the edge servers must be maintained while the total time cost is minimized. The authors design a multi-objective optimization evolutionary algorithm based on the improved strength Pareto evolutionary algorithm (SPEA2), and use the technique for order preference by similarity to ideal solution (TOPSIS) for multiple-criteria decision making (MCDM). Through theoretical analysis and experimental evaluation, the results show that VTO is effective and efficient.
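As a rough illustration (not the authors' implementation; the candidate solutions and weights below are invented), the TOPSIS step that picks one solution from a Pareto front produced by an evolutionary algorithm can be sketched as:

```python
import math

def topsis_rank(solutions, weights, benefit):
    """Rank candidate solutions by TOPSIS.

    solutions: list of criterion vectors (e.g. [total_time, load_imbalance])
    weights:   importance of each criterion, summing to 1
    benefit:   per criterion, True if larger is better, False if smaller
    """
    n_crit = len(weights)
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(s[j] ** 2 for s in solutions)) for j in range(n_crit)]
    v = [[weights[j] * s[j] / norms[j] for j in range(n_crit)] for s in solutions]
    # Ideal best and worst points per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))  # relative closeness to ideal
    return scores

# Pareto front of (total time cost, load-balance variance); both minimized.
front = [[10.0, 0.30], [12.0, 0.10], [15.0, 0.05]]
scores = topsis_rank(front, weights=[0.5, 0.5], benefit=[False, False])
chosen = front[max(range(len(front)), key=scores.__getitem__)]
```

The ranking rewards solutions that are simultaneously far from the worst point and near the ideal point, which is why the balanced middle solution is selected here rather than either extreme.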
The second article, titled “AI for Online Customer Service: Intent Recognition and Slot Filling based on Deep Learning Technology” and authored by Yirui Wu et al., proposes an intelligent online customer service system powered by cloud/edge computing and deep learning technologies. Cloud/edge computing provides the flexible, pervasive computation and storage capabilities needed to support various applications, and deep learning models comprehend text inputs by consuming these computing and storage resources. To prevent the error accumulation caused by modeling the two subtasks independently, the authors jointly model intent recognition and slot filling in an end-to-end neural network. The method extracts distinctive features with a dual structure to take full advantage of the interactive and level information between the two sub-tasks. An attention scheme is introduced to enhance the feature representation by incorporating sentence-level context information. Experimental results demonstrate the effectiveness of the proposed method, which obtains accurate and promising results.
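As a toy illustration of the attention idea (not the authors' network; the word vectors and query below are made up), attention pools the word features of a sentence into one context vector, weighting each word by its relevance to a task-specific query:

```python
import math

def attention_pool(word_vecs, query):
    """Pool word vectors into one sentence vector, weighting each word
    by the softmax of its dot product with a query vector."""
    scores = [sum(w * q for w, q in zip(vec, query)) for vec in word_vecs]
    m = max(scores)                        # subtract max to stabilize softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(word_vecs[0])
    pooled = [sum(weights[i] * word_vecs[i][d] for i in range(len(word_vecs)))
              for d in range(dim)]
    return pooled, weights

# Invented 2-d features for the words "book", "a", "flight"; the query is
# meant to attend to content words rather than the function word "a".
words = [[1.0, 0.0], [0.1, 0.1], [0.9, 0.2]]
query = [1.0, 0.0]
sentence_vec, attn = attention_pool(words, query)
```

In a joint intent/slot model, a pooled sentence vector like this can feed the intent classifier while the per-word weights inform slot tagging, which is one way the two sub-tasks can share information.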
Edge computing has been adopted as a feasible solution to the limited computing ability of mobile devices, and it provides services for IoT devices in different geographical locations. In the third article, titled “Optimal IoT Service Offloading with Uncertainty in SDN-Based Mobile Edge Computing”, Huizhen Hao et al. propose an optimal IoT service offloading (OSO) method under uncertainty, in which completion time and load-balance variance are the two optimization goals for developing offloading strategies. The non-dominated sorting genetic algorithm II (NSGA-II) is fully investigated to improve completion time and load-balance variance. Moreover, the optimal strategy is selected using Simple Additive Weighting (SAW) and Multiple Criteria Decision Making (MCDM). The experimental evaluation demonstrates the superiority of OSO.
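The SAW selection step can be sketched as follows (a minimal illustration, not the authors' code; the candidate strategies and weights are invented):

```python
def saw_select(strategies, weights):
    """Simple Additive Weighting over cost-type criteria
    (here: completion time and load-balance variance; smaller is better).

    Each criterion is min-max normalized so that the best (smallest) value
    maps to 1 and the worst to 0, then the scores are weight-summed.
    """
    cols = list(zip(*strategies))
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm.append([(hi - x) / span for x in col])  # cost criterion: invert
    scores = [sum(w * norm[j][i] for j, w in enumerate(weights))
              for i in range(len(strategies))]
    best = max(range(len(strategies)), key=scores.__getitem__)
    return best, scores

# Candidate offloading strategies, e.g. from an NSGA-II Pareto set:
# (completion time in s, load-balance variance); values are invented.
candidates = [(4.0, 0.40), (5.0, 0.15), (7.0, 0.05)]
best_idx, scores = saw_select(candidates, weights=(0.6, 0.4))
```

With a 0.6/0.4 weighting, the middle strategy wins: it trades a little completion time for a much better load balance than the fastest candidate.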
With the rapid development of cloud computing technology, online recommendation services have become a new trend for providing specific information on topics such as lifestyle, fashion news, and a variety of other activities. The fourth article, titled “A Time-aware Hybrid Algorithm for Online Recommendation Services” and authored by Fuzhen Sun et al., proposes a time-sensitive and stable interest similarity model to calculate the similarity of user interests. Furthermore, a novel algorithm termed item popularity similarity with time sensitivity (IPSTS) is proposed by combining these two kinds of similarity models, which are assigned different weight factors to balance their impacts. Comparative experiments evaluating the proposed approach against traditional collaborative filtering algorithms indicate that IPSTS can effectively reduce the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE).
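In the same spirit, a time-sensitive user similarity and a weighted combination of two similarity signals can be sketched as follows (a hypothetical illustration, not the IPSTS formulas; the decay form, ratings, and weight `alpha` are all assumptions):

```python
import math

def time_decayed_sim(user_a, user_b, now, half_life=30.0):
    """Cosine-style similarity over co-rated items, where each rating pair
    is discounted by how long ago the ratings were made (decay in days)."""
    common = set(user_a) & set(user_b)
    if not common:
        return 0.0
    dot = na = nb = 0.0
    for item in common:
        ra, ta = user_a[item]
        rb, tb = user_b[item]
        age = ((now - ta) + (now - tb)) / 2.0       # average rating age
        decay = math.exp(-math.log(2) * age / half_life)
        dot += decay * ra * rb
        na += ra * ra
        nb += rb * rb
    return dot / math.sqrt(na * nb)

def combined_sim(sim_time_sensitive, sim_stable, alpha=0.6):
    """Weight two similarity signals, as IPSTS does with its two models."""
    return alpha * sim_time_sensitive + (1 - alpha) * sim_stable

# ratings: item -> (rating, day on which it was rated); values are invented.
alice = {"i1": (5, 90), "i2": (3, 95)}
bob   = {"i1": (4, 92), "i2": (4, 60)}
recent = time_decayed_sim(alice, bob, now=100)
```

The decay means agreement on recently rated items counts more than agreement on old ones, which is the intuition behind making the interest similarity time-sensitive.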
Hadoop is an open-source project from Apache that provides a distributed file system and the MapReduce distributed computing framework. The current Apache 2.0 license agreement supports on-demand payment by consumers. The last article, titled “Near-Data Prediction Based Speculative Optimization in a Distribution Environment” and authored by Qi Liu et al., proposes a speculative execution (SE) optimization strategy based on near-data prediction. The strategy analyzes real-time task execution information to predict the required running time, and selects backup nodes based on actual requirements and the proximity of the data, so that the SE strategy achieves the best performance. Experiments show that, in a heterogeneous Hadoop environment, the optimization strategy can effectively improve the effectiveness and accuracy of various tasks and enhance the performance of the cloud computing platform, benefiting consumers better than before.
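The core of speculative execution, predicting which running tasks will finish late so that backup copies can be launched, can be sketched as follows (a simplified illustration assuming linear progress, not the authors' near-data prediction model; task data are invented):

```python
def pick_stragglers(tasks, now, slow_ratio=1.5):
    """Flag tasks whose predicted remaining time exceeds `slow_ratio`
    times the average, so a backup copy can be scheduled for them.

    tasks: dict task_id -> (progress in [0, 1], start_time)
    Returns straggler ids sorted slowest-first.
    """
    predicted = {}
    for tid, (progress, start) in tasks.items():
        elapsed = now - start
        rate = progress / elapsed if elapsed > 0 else 1.0
        # Extrapolate remaining time from the observed progress rate.
        predicted[tid] = (1.0 - progress) / rate if rate > 0 else float("inf")
    avg = sum(predicted.values()) / len(predicted)
    slow = [tid for tid, rem in predicted.items() if rem > slow_ratio * avg]
    return sorted(slow, key=predicted.get, reverse=True)

# Three map tasks started at time 0; t3 has made little progress by time 10
# and should receive a backup copy on another (ideally near-data) node.
tasks = {"t1": (0.9, 0), "t2": (0.8, 0), "t3": (0.1, 0)}
stragglers = pick_stragglers(tasks, now=10)
```

A near-data variant would additionally rank the candidate backup nodes by their distance to the task's input data, launching the copy where the data already resides.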