Cluster Computing, Volume 17, Issue 1, pp 61–78

Nephele streaming: stream processing under QoS constraints at scale

  • Björn Lohrmann
  • Daniel Warneke
  • Odej Kao

Abstract


The ability to process large numbers of continuous data streams in a near-real-time fashion has become a crucial prerequisite for many scientific and industrial use cases in recent years. While the individual data streams are usually trivial to process, their aggregated data volumes easily exceed the scalability of traditional stream processing systems.

At the same time, massively-parallel data processing systems like MapReduce or Dryad currently enjoy tremendous popularity for data-intensive applications and have proven to scale to large numbers of nodes. Many of these systems also provide streaming capabilities. However, unlike traditional stream processors, these systems have so far disregarded the QoS requirements of prospective stream processing applications.

In this paper we address this gap. First, we analyze common design principles of today’s parallel data processing frameworks and identify those principles that provide degrees of freedom in trading off the QoS goals of latency and throughput. Second, we propose a highly distributed scheme that allows these frameworks to detect violations of user-defined QoS constraints and to optimize job execution without manual intervention. As a proof of concept, we implemented our approach for our massively-parallel data processing framework Nephele and evaluated its effectiveness through a comparison with Hadoop Online.
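One such degree of freedom is the size of a task’s output buffers: large buffers amortize per-record shipping cost (favoring throughput), while small buffers flush sooner (favoring latency). The following is a minimal sketch of how a distributed controller might exploit that trade-off to react to violations of a per-channel latency constraint; the function name and thresholds are illustrative assumptions, not the actual Nephele API.

```python
def adjust_buffer_size(measured_latency_ms: float,
                       constraint_ms: float,
                       buffer_size: int,
                       min_size: int = 256,
                       max_size: int = 65536) -> int:
    """Illustrative controller step for one output channel.

    Shrink the buffer when the user-defined latency constraint is
    violated (smaller buffers are flushed sooner, lowering latency);
    grow it when there is ample headroom (larger buffers amortize
    shipping overhead, raising throughput). The 0.8 headroom factor
    is an assumed tuning parameter.
    """
    if measured_latency_ms > constraint_ms:
        return max(min_size, buffer_size // 2)
    if measured_latency_ms < 0.8 * constraint_ms:
        return min(max_size, buffer_size * 2)
    return buffer_size
```

Running such a step periodically on each worker, driven by locally measured channel latencies, keeps the scheme distributed: no central component needs a global view to enforce the constraint.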

For an example streaming application from the multimedia domain running on a cluster of 200 nodes, our approach improves the processing latency by a factor of at least 13 while preserving high data throughput when needed.


Keywords: Massively-parallel · Stream processing · Distributed systems · Latency · QoS



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Technische Universität Berlin, Berlin, Germany
  2. International Computer Science Institute (ICSI), Berkeley, USA