Fault tolerant scheduling of tasks of two sizes under resource augmentation

  • Dariusz R. Kowalski
  • Prudence W. H. Wong
  • Elli Zavou
Article

Abstract

Guaranteeing the eventual execution of tasks on machines that are prone to unpredictable crashes and restarts is challenging, but also highly important. The problem becomes even harder when tasks arrive dynamically and have different computational demands, i.e., processing times (sizes). In this paper we focus on online task scheduling in such systems, considering one machine and at least two different task sizes: our algorithms are designed for two task sizes, while the complementary bounds hold for any number of task sizes greater than one. We study the latency and 1-completed-load competitiveness of deterministic scheduling algorithms under worst-case scenarios, assuming an adversary that controls the machine crashes and restarts as well as the task arrivals, including their computational demands. More precisely, we investigate the effect of resource augmentation, in the form of processor speedup, on the machine's performance, examining the two efficiency measures for different speedups. We first identify the speedup threshold below which no deterministic algorithm can be competitive. We then propose an online algorithm, named \(\gamma\text{-Burst}\), that achieves both latency and 1-completed-load competitiveness whenever the speedup exceeds this threshold, which shows that the threshold is not only necessary but also sufficient for competitiveness.
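
The abstract does not reproduce the paper's formal definitions of the latency and 1-completed-load measures, nor the \(\gamma\text{-Burst}\) algorithm itself. The sketch below is therefore only a minimal toy illustration of the setting under stated assumptions: a speed-\(s\) machine that is available during adversarially chosen alive intervals, tasks of two sizes injected over time, and the completed load accumulated by a simple shortest-task-first replay. All names (Task, completed_load, the greedy policy) are illustrative assumptions, not the paper's actual algorithm or definitions.

    # Toy illustration only: replay tasks of two sizes on a speed-s machine that
    # is available only during adversarially chosen (start, end) intervals.
    # The greedy shortest-task-first policy below is an assumption for the sketch,
    # not the paper's gamma-Burst algorithm.
    from dataclasses import dataclass
    from typing import List, Tuple


    @dataclass
    class Task:
        arrival: float  # time the adversary injects the task
        size: float     # processing time at speed 1 (one of the two fixed sizes)


    def completed_load(tasks: List[Task],
                       alive: List[Tuple[float, float]],
                       speedup: float) -> float:
        """Total size of tasks fully completed when pending tasks are attempted in
        shortest-first order; an attempt that would be cut off by the next crash is
        skipped, a simplification of the non-clairvoyant setting in the paper."""
        pending = sorted(tasks, key=lambda t: (t.size, t.arrival))
        done = 0.0
        for start, end in alive:
            now = start
            for t in list(pending):
                if t.arrival > now:
                    continue                      # not yet injected at this point
                finish = now + t.size / speedup   # execution time shrinks with speedup
                if finish <= end:                 # fits before the next crash
                    done += t.size
                    pending.remove(t)
                    now = finish
        return done


    if __name__ == "__main__":
        tasks = [Task(0.0, 1.0), Task(0.0, 2.0), Task(1.5, 1.0)]
        alive = [(0.0, 2.2), (3.0, 5.0)]          # adversarial availability pattern
        for s in (1.0, 1.5, 2.0):
            print(f"speedup {s}: completed load = {completed_load(tasks, alive, s)}")

In this toy model a higher speedup lets a task finish within a shorter alive interval, which is the intuition behind the speedup threshold studied in the paper.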

Keywords

Scheduling · Online algorithms · Task sizes · Adversarial failures · Resource augmentation · Competitive analysis

Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Dariusz R. Kowalski (1)
  • Prudence W. H. Wong (1)
  • Elli Zavou (2)

  1. University of Liverpool, Liverpool, UK
  2. Universidad Carlos III de Madrid and IMDEA Networks Institute, Leganés, Madrid, Spain
