Dynamic Auto Reconfiguration of Operator Placement in Wireless Distributed Stream Processing Systems

Published in: Wireless Personal Communications
Abstract

Devices generate data at high speed and volume in real time. This growth in data generation, together with the expansion of fog and edge computing infrastructure, has driven notable development of distributed stream processing systems (DSPS). A DSPS application has Quality of Service (QoS) constraints in terms of resource cost and time, while the underlying physical resources are distributed and heterogeneous. The resulting resource-constrained scheduling problem has considerable implications for system performance and QoS violations. A static deployment of applications in a fog or edge scenario must be monitored continuously for runtime issues, and corrective actions must be taken accordingly. In this paper, we add an adaptation capability based on reinforcement learning to an existing stream processing framework scheduler. This capability enables the scheduler to make decisions on its own when the system model or knowledge of the environment is not known upfront; the reinforcement learning methods adapt to the system when a model of its different states is unavailable. We consider applications whose workload cannot be characterized or predicted, so input-load prediction does not help online scheduling. The Q-Learning based online scheduler learns to make dynamic scaling decisions at runtime when performance degrades. We validated the proposed approach with real-time and benchmark applications on a DSPS cluster, obtaining an average 6% reduction in response time and a 15% increase in throughput when the Q-Learning module is employed in the scheduler.
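As a minimal sketch of the kind of Q-Learning based scaling decision the abstract describes, the following Python snippet shows a tabular Q-Learning agent choosing among scale-out, scale-in, and no-op actions from coarse performance observations. The state discretization, action set, and reward signal are illustrative assumptions for exposition, not the authors' actual scheduler implementation.

```python
import random
from collections import defaultdict

# Hypothetical sketch: a tabular Q-Learning agent that picks a scaling action
# for a DSPS operator from a coarse performance state. State buckets, actions,
# and the reward are assumptions, not the paper's actual design.

ACTIONS = ["scale_out", "scale_in", "no_op"]

class QLearningScaler:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)                     # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def discretize(self, latency_ms, cpu_util):
        # Bucket observed latency and CPU utilization into a small discrete state.
        return (min(int(latency_ms // 100), 5), min(int(cpu_util * 10), 10))

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)]
        )

# Example monitoring step (placeholder values):
agent = QLearningScaler()
state = agent.discretize(latency_ms=250, cpu_util=0.8)
action = agent.choose(state)
# ...apply the scaling action to the topology, observe its effect...
reward = -1.0 if action == "scale_out" else 0.5      # e.g. penalize extra resource cost
next_state = agent.discretize(latency_ms=180, cpu_util=0.6)
agent.update(state, action, reward, next_state)
```

In this style of design, the reward would typically trade off QoS metrics (response time, throughput) against the resource cost of the chosen scaling action, so the learned policy only scales out when performance degradation outweighs the added cost.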

Availability of Data and Material

The authors have included all supporting data and material details in this work.

Funding

The authors did not receive support from any organization for the submitted work.

Financial Interests: The authors have no relevant financial or non-financial interests to disclose.

Author information

Corresponding author

Correspondence to K. Sornalakshmi.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare.

Code Availability

The authors have included all code and software details in this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sornalakshmi, K., Vadivu, G. Dynamic Auto Reconfiguration of Operator Placement in Wireless Distributed Stream Processing Systems. Wireless Pers Commun 127, 293–318 (2022). https://doi.org/10.1007/s11277-021-08264-y
