
Load-Aware Shedding in Stream Processing Systems

Chapter in Transactions on Large-Scale Data- and Knowledge-Centered Systems XLVI

Abstract

Distributed stream processing systems are today gaining momentum as a tool to perform analytics on continuous data streams. Load shedding is a technique used to handle unpredictable spikes in the input load whenever available computing resources are not adequately provisioned. In this paper, we propose Load-Aware Shedding (LAS), a novel load shedding solution that, unlike previous works, relies neither on a predefined cost model nor on any assumption about tuple execution durations. Leveraging sketches, LAS efficiently estimates the execution duration of each tuple with small error bounds and uses this knowledge to proactively shed input streams at any operator, limiting queuing latencies while dropping as few tuples as possible. We provide a theoretical analysis proving that LAS is an \(({\varepsilon }, \delta )\)-approximation of the optimal online load shedder. Furthermore, through an extensive practical evaluation based on simulations and a prototype, we evaluate its impact on stream processing applications.

This work has been partially funded by the MIUR SCN-00064 project RoMA and by Sapienza University of Rome through the project RM11916B75A3293D.

A preliminary short version of this work appeared in the Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems.

N. Rivetti—Independent researcher.


Notes

  1. In the data streaming literature, the frequency is the number of occurrences, not divided by time, which differs from the classical (physics) definition [17].

  2. This is not the only possible definition of the load shedding problem. Other variants are briefly discussed in Sect. 6.

  3. This correction factor derives from the fact that \(\hat{w}(t)\) is a \((\varepsilon ,\delta )\)-approximation of w(t) as shown in Sect. 4.

  4. For readability reasons, proofs of these theorems are available in Appendix A.

References

  1. Abadi, D.J., et al.: Aurora: a new model and architecture for data stream management. Int. J. Very Large Data Bases (VLDB J.) 12(2), 120–139 (2003)

  2. Babcock, B., Datar, M., Motwani, R.: Load shedding for aggregation queries over data streams. In: Proceedings of the 20th International Conference on Data Engineering (ICDE 2004), pp. 350–361. IEEE (2004)

  3. Borkowski, M., Hochreiner, C., Schulte, S.: Minimizing cost by reducing scaling operations in distributed stream processing. Proc. VLDB Endow. 12(7), 724–737 (2019)

  4. Carter, J.L., Wegman, M.N.: Universal classes of hash functions. J. Comput. Syst. Sci. 18, 143–154 (1979)

  5. Cormode, G.: Sketch techniques for approximate query processing. In: Synopses for Approximate Query Processing: Samples, Histograms, Wavelets and Sketches, Foundations and Trends in Databases. NOW Publishers (2011)

  6. Cormode, G., Muthukrishnan, S.: An improved data stream summary: the count-min sketch and its applications. J. Algorithms 55, 58–75 (2005)

  7. Dobra, A., Garofalakis, M., Gehrke, J., Rastogi, R.: Sketch-based multi-query processing over data streams. In: Bertino, E., et al. (eds.) EDBT 2004. LNCS, vol. 2992, pp. 551–568. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24741-8_32

  8. Gedik, B., Wu, K., Yu, P.S., Liu, L.: GrubJoin: an adaptive, multi-way, windowed stream join with time correlation-aware CPU load shedding. IEEE Trans. Knowl. Data Eng. 19(10), 1363–1380 (2007)

  9. He, Y., Barman, S., Naughton, J.F.: On load shedding in complex event processing. arXiv preprint arXiv:1312.4283 (2013)

  10. He, Y., Barman, S., Naughton, J.F.: On load shedding in complex event processing. In: Proceedings of the 17th International Conference on Database Theory (ICDT 2014), pp. 213–224 (2014). OpenProceedings.org

  11. Heinze, T., Aniello, L., Querzoni, L., Jerzak, Z.: Cloud-based data stream processing. In: Proceedings of the 8th ACM International Conference on Distributed Event-Based Systems (DEBS 2014), pp. 238–245. ACM (2014)

  12. Ilarri, S., Wolfson, O., Mena, E., Illarramendi, A., Sistla, P.: A query processor for prediction-based monitoring of data streams. In: Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology, EDBT 2009, pp. 415–426. Association for Computing Machinery, New York (2009)

  13. Kalyvianaki, E., Charalambous, T., Fiscato, M., Pietzuch, P.: Overload management in data stream processing systems with latency guarantees. In: 7th IEEE International Workshop on Feedback Computing (Feedback Computing 2012) (2012)

  14. Kalyvianaki, E., Fiscato, M., Salonidis, T., Pietzuch, P.: THEMIS: fairness in federated stream processing under overload. In: Proceedings of the 2016 International Conference on Management of Data, pp. 541–553. ACM (2016)

  15. Kammoun, A.: Enhancing stream processing and complex event processing systems. Ph.D. thesis, Université Jean Monnet, Saint-Etienne (2019)

  16. Katsipoulakis, N.R., Labrinidis, A., Chrysanthis, P.K.: Concept-driven load shedding: reducing size and error of voluminous and variable data streams. In: 2018 IEEE International Conference on Big Data (Big Data), pp. 418–427 (2018)

  17. Muthukrishnan, S.: Data Streams: Algorithms and Applications. Now Publishers Inc. (2005)

  18. Olston, C., Jiang, J., Widom, J.: Adaptive filters for continuous queries over distributed data streams. In: Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, SIGMOD 2003, pp. 563–574. Association for Computing Machinery, New York (2003)

  19. Quoc, D.L., Chen, R., Bhatotia, P., Fetzer, C., Hilt, V., Strufe, T.: StreamApprox: approximate computing for stream analytics. In: Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference, Middleware 2017, pp. 185–197. Association for Computing Machinery, New York (2017)

  20. Reiss, F., Hellerstein, J.M.: Data triage: an adaptive architecture for load shedding in TelegraphCQ. In: Proceedings of the 21st International Conference on Data Engineering (ICDE 2005), pp. 155–156. IEEE (2005)

  21. Rivetti, N., Busnel, Y., Mostefaoui, A.: Efficiently summarizing data streams over sliding windows. In: Proceedings of the 14th IEEE International Symposium on Network Computing and Applications (NCA 2015), Boston, USA, Best Student Paper Award, September 2015

  22. Slo, A., Bhowmik, S., Flaig, A., Rothermel, K.: pSPICE: partial match shedding for complex event processing. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 372–382. IEEE (2019)

  23. Slo, A., Bhowmik, S., Rothermel, K.: eSPICE: probabilistic load shedding from input event streams in complex event processing. In: Proceedings of the 20th International Middleware Conference, pp. 215–227 (2019)

  24. Stanoi, I., Mihaila, G., Palpanas, T., Lang, C.: WhiteWater: distributed processing of fast streams. IEEE Trans. Knowl. Data Eng. 19(9), 1214–1226 (2007)

  25. Tatbul, N., Çetintemel, U., Zdonik, S.: Staying fit: efficient load shedding techniques for distributed stream processing. In: Proceedings of the 33rd International Conference on Very Large Data Bases, pp. 159–170. VLDB Endowment (2007)

  26. Tatbul, N., Çetintemel, U., Zdonik, S., Cherniack, M., Stonebraker, M.: Load shedding in a data stream manager. In: Proceedings of the 29th International Conference on Very Large Data Bases (VLDB 2003), pp. 309–320. VLDB Endowment (2003)

  27. The Apache Software Foundation. Apache Storm. http://storm.apache.org

  28. Tok, W.H., Bressan, S., Lee, M.-L.: A stratified approach to progressive approximate joins. In: Proceedings of the 11th International Conference on Extending Database Technology: Advances in Database Technology, EDBT 2008, pp. 582–593. Association for Computing Machinery, New York (2008)

  29. Tu, Y.-C., Liu, S., Prabhakar, S., Yao, B.: Load shedding in stream databases: a control-based approach. In: Proceedings of the 32nd International Conference on Very Large Data Bases (VLDB 2006), pp. 787–798. VLDB Endowment (2006)

  30. Zhang, Y., Huang, C., Huang, C.: A novel adaptive load shedding scheme for data stream processing. In: Future Generation Communication and Networking (FGCN 2007), pp. 378–384. IEEE (2007)

  31. Zhao, B., Viet Hung, N.Q., Weidlich, M.: Load shedding for complex event processing: input-based and state-based techniques. In: 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, pp. 1093–1104 (2020). https://doi.org/10.1109/ICDE48307.2020.00099

Corresponding author: Leonardo Querzoni.

A Theoretical Analysis

Data streaming algorithms rely heavily on pseudo-random functions that map elements of the stream to uniformly distributed image values, so that the essential information of the input stream is preserved regardless of the frequency distribution of its elements.
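
For instance, the 2-universal families of Carter and Wegman [4] provide exactly this guarantee; a minimal sketch in Python (the modular construction is the textbook one, the prime and the parameter names are our illustrative choices):

```python
import random

# Carter-Wegman 2-universal hashing: h(x) = ((a*x + b) mod p) mod k,
# with p a prime larger than the item universe, a drawn from [1, p)
# and b from [0, p). Any two distinct items collide with probability
# at most 1/k over the random choice of (a, b).
P = 2_147_483_647  # Mersenne prime 2^31 - 1

def make_hash(k, rng=random):
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % k
```

Each row of a sketch draws one independent function from such a family, which is what makes the collision probabilities in Appendix A.3 tractable.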

This appendix complements the theoretical analysis of the quality of the shedding performed by LAS given in Sect. 4 with the corresponding proofs, as well as the complexity results stated in Sect. 3.

First we study the correctness and optimality of the shedding algorithm under the full knowledge assumption (i.e., the shedding strategy knows the exact execution duration \(w_t\) of each tuple t). Then, in Appendix A.3, we provide a probabilistic analysis of the mechanism that LAS uses to estimate the tuple execution durations.

1.1 A.1 Time, Space and Communication Complexities

In this section we provide the proofs of the time, space and communication complexities presented in Sect. 3.

Theorem 1 [Time complexity of LAS]. For each tuple read from the input stream, the time complexity of LAS for the operator and the load shedder is \(\mathcal {O}(\log 1/\delta )\).

Proof

By Listing 3.1, for each tuple read from the input stream, the algorithm increments one entry per row of both the \({\mathcal {F}}\) and \({\mathcal {W}}\) matrices. Since each matrix has \(\log 1/\delta \) rows, the resulting update time complexity is \(\mathcal {O}(\log 1/\delta )\). By Listing 3.2, for each submitted tuple, the load shedder has to retrieve the estimated execution duration of that tuple. This operation requires reading one entry per row of both the \({\mathcal {F}}\) and \({\mathcal {W}}\) matrices. Since each matrix has \(\log 1/\delta \) rows, the resulting query time complexity is \(\mathcal {O}(\log 1/\delta )\).
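
Listings 3.1 and 3.2 are not reproduced in this appendix; the following Python sketch (class and method names are ours, and Python's built-in hash stands in for a 2-universal family) mirrors the operations the proof counts, with one entry touched per row on both the update and the query path:

```python
import math
import random

class LasSketch:
    """F counts tuple occurrences, W accumulates execution durations; both
    have r = ceil(ln(1/delta)) rows and c = ceil(e/eps) columns, matching
    the dimensions used in Theorems 1 and 2."""
    def __init__(self, eps, delta, rng=random):
        self.r = max(1, math.ceil(math.log(1.0 / delta)))
        self.c = max(1, math.ceil(math.e / eps))
        self.F = [[0] * self.c for _ in range(self.r)]
        self.W = [[0.0] * self.c for _ in range(self.r)]
        # one salt per row; hashing (salt, t) stands in for an independent
        # 2-universal function per row (consistent within one process)
        self.salts = [rng.randrange(1 << 30) for _ in range(self.r)]

    def _col(self, i, t):
        return hash((self.salts[i], t)) % self.c

    def update(self, t, w):
        """Operator side: one entry per row -> O(log 1/delta)."""
        for i in range(self.r):
            j = self._col(i, t)
            self.F[i][j] += 1
            self.W[i][j] += w

    def estimate(self, t):
        """Shedder side: also O(log 1/delta). Here we take the row-wise
        minimum of the W/F ratios, Count-Min style; the exact aggregation
        LAS uses is the one defined in Listing 3.2."""
        return min(self.W[i][self._col(i, t)] / max(1, self.F[i][self._col(i, t)])
                   for i in range(self.r))
```

With \(\varepsilon = \delta = 0.1\) this yields a 3-row, 28-column pair of matrices, and an item observed without collisions is estimated exactly.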

Theorem 2 [Space Complexity of LAS]. The space complexity of LAS for the operator and load shedder is \(\mathcal {O} \left( \frac{1}{\varepsilon }\log \frac{1}{\delta }(\log m + \log n)\right) \) bits.

Proof

The operator stores two matrices with \(\log (\frac{1}{\delta })\) rows and \(\frac{e}{\varepsilon }\) columns, whose counters take \(\log m\) bits each. In addition, it also stores a hash function with a domain of size n. The space complexity of LAS on the operator is thus \(\mathcal {O} \left( \frac{1}{\varepsilon }\log \frac{1}{\delta }(\log m + \log n)\right) \) bits. The load shedder stores the same matrices, as well as a scalar, so the space complexity of LAS on the load shedder is also \(\mathcal {O} \left( \frac{1}{\varepsilon }\log \frac{1}{\delta }(\log m + \log n)\right) \) bits.
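
As a back-of-the-envelope instantiation of this bound (the parameter values below are purely illustrative):

```python
import math

def las_space_bits(eps, delta, m, n):
    """Two r x c matrices of counters sized for a stream of length m,
    plus per-row hash seeds over a universe of n distinct tuples:
    O((1/eps) * log(1/delta) * (log m + log n)) bits overall."""
    r = math.ceil(math.log(1.0 / delta))    # rows: log(1/delta)
    c = math.ceil(math.e / eps)             # columns: e/eps
    counter_bits = math.ceil(math.log2(m))  # each counter counts up to m
    hash_bits = math.ceil(math.log2(n))     # one seed per row over [n]
    return 2 * r * c * counter_bits + r * hash_bits
```

For example, with \(\varepsilon = 0.05\), \(\delta = 0.01\), a stream of \(m = 10^9\) tuples over \(n = 10^6\) distinct values, this comes to 16,600 bits, i.e., about 2 KB per operator.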

Theorem 3 [Communication complexity of LAS]. The communication complexity of LAS is of \(\mathcal {O} \left( \frac{m}{N} \right) \) messages and \(\mathcal {O} \left( \frac{m}{N} \left( \frac{1}{\varepsilon }\log \frac{1}{\delta }(\log m + \log n) + \log m \right) \right) \) bits.

Proof

After executing N tuples, the operator may send the \({\mathcal {F}}\) and \({\mathcal {W}}\) matrices to the load shedder.

This generates a communication cost of \(\mathcal {O} \left( \frac{m}{N} \frac{1}{\varepsilon }\log \frac{1}{\delta }(\log m + \log n)\right) \) bits via \(\mathcal {O} \left( \frac{m}{N} \right) \) messages. When the load shedder receives these matrices, the synchronization mechanism kicks in and triggers a round trip communication (half of which is piggybacked by the tuples) with the operator. The communication cost of the synchronization mechanism is \(\mathcal {O} \left( \frac{m}{N} \right) \) messages and \(\mathcal {O} \left( \frac{m}{N} \log m \right) \) bits.

Note that the communication cost is low with respect to the stream size since the window size N should be chosen such that \(N \gg 1\) (e.g., in our tests we have \(N=1024\)).
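
As an illustration of how modest this overhead is (the million-tuple stream below is a hypothetical example):

```python
# m tuples and one sketch transfer every N executed tuples give O(m/N)
# synchronization messages; with the N = 1024 used in our tests the
# overhead is negligible with respect to the stream size.
m, N = 1_000_000, 1024
sync_messages = m // N
```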

1.2 A.2 Correctness of LAS

We suppose that tuples cannot be preempted, that is, once started, a tuple must be processed without interruption on the available operator instance. As mentioned before, in this analysis we assume that the execution duration w(t) is known for each tuple t. Finally, given our system model, we consider the problem of minimizing \({d}\), the number of dropped tuples, while guaranteeing that the average queuing latency \({\overline{Q}}(t)\) is upper-bounded by \(\tau \), \(\forall t \in \sigma \). The solution must work online: the decision to enqueue or drop a tuple can rely only on knowledge of the tuples received so far in the stream.
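
To make the setting concrete, the following toy Python shedder (entirely our own simplification: it enforces a per-tuple rather than an average latency bound, and ignores the estimation error correction of Sect. 4) shows the shape of the online enqueue-or-drop decision:

```python
class Shedder:
    """Toy online shedder: drop a tuple when the work already queued
    would make it wait longer than tau. (LAS bounds the *average*
    queuing latency and corrects estimates by the (eps, delta) factor;
    both refinements are omitted here.)"""
    def __init__(self, tau):
        self.tau = tau
        self.backlog = 0.0  # estimated seconds of work ahead of a new tuple
        self.dropped = 0

    def check(self, w_hat):
        # would this tuple queue for longer than tau?
        return self.backlog > self.tau

    def offer(self, w_hat):
        """Enqueue the tuple (True) or shed it (False)."""
        if self.check(w_hat):
            self.dropped += 1
            return False
        self.backlog += w_hat
        return True

    def on_executed(self, w):
        """Operator finished a tuple of duration w."""
        self.backlog = max(0.0, self.backlog - w)
```

With \(\tau = 1\) and 0.4 s tuples arriving faster than they complete, the backlog crosses \(\tau \) after three admissions and the fourth tuple is shed.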

Let OPT be the online algorithm that provides the optimal solution to Problem 1. We denote by \({\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\) (resp. \({d}^{\sigma }_{\mathrm{OPT}}\)) the set of dropped tuple indices (resp. the number of dropped tuples) produced by the OPT algorithm fed by stream \(\sigma \) (cf., Sect. 2). We also denote by \({d}^{\sigma }_{\mathrm{LAS}}\) the number of dropped tuples produced by LAS, introduced in Sect. 3.3, fed with the same stream \(\sigma \).

Theorem 4 [Correctness and Optimality of LAS]. For any \(\sigma \), we have \({d}^{\sigma }_{\mathrm{LAS}} = {d}^{\sigma }_{\mathrm{OPT}}\) and \(\forall t\in \sigma , {\overline{Q}}^{\sigma }_{\mathrm{LAS}}(t) \le \tau \).

Proof

Given a stream \(\sigma \), consider the sets of indices of tuples dropped by OPT and LAS respectively, namely \({\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\) and \({\mathcal {D}}^{\sigma }_{\mathrm{LAS}}\). Below, we prove by contradiction that \({d}^{\sigma }_{\mathrm{LAS}} = {d}^{\sigma }_{\mathrm{OPT}}\).

Assume that \({d}^{\sigma }_{\mathrm{LAS}} > {d}^{\sigma }_{\mathrm{OPT}}\). Without loss of generality, we denote by \(i_1, \ldots , i_{{d}^{\sigma }_{\mathrm{LAS}}}\) the ordered indices in \({\mathcal {D}}^{\sigma }_{\mathrm{LAS}}\), and by \(j_1, \ldots , j_{{d}^{\sigma }_{\mathrm{OPT}}}\) the ordered indices in \({\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\). Let us define a as the largest integer such that \(\forall \ell \le a, i_\ell = j_\ell \) (i.e., \(i_1 = j_1, \ldots , i_a = j_a\)). Thus, we have \(i_{a+1} \ne j_{a+1}\).

  • Assume that \(i_{a+1} < j_{a+1}\). Then, according to Sect. 3.3, the \(i_{a+1}\)-th tuple of \(\sigma \) has been dropped by LAS because the method Check returned true. Thus, since \(i_{a+1}\notin {\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\), the OPT run has enqueued this tuple, violating the constraint \(\tau \). This contradicts the definition of OPT.

  • Assume now that \(i_{a+1} > j_{a+1}\). The fact that LAS does not drop the \(j_{a+1}\)-th tuple means that Check returned false, i.e., that tuple does not violate the constraint on \(\tau \). OPT, being optimal, may still drop tuples for which Check is false if this allows it to drop an overall lower number of tuples; but dropping the \(j_{a+1}\)-th tuple for that reason would mean that OPT knows the future evolution of the stream and bases its decision on that knowledge. Since, by assumption, OPT is an online algorithm, the contradiction follows.

Then, we have that \(i_{a+1} = j_{a+1}\). By induction, we iterate this reasoning for all the remaining indices from \(a+1\) to \({d}^{\sigma }_{\mathrm{OPT}}\). We then obtain that \({\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\subseteq {\mathcal {D}}^{\sigma }_{\mathrm{LAS}}\).

Since by assumption \({d}^{\sigma }_{\mathrm{OPT}}<{d}^{\sigma }_{\mathrm{LAS}}\), there exists \(\ell \in {\mathcal {D}}^{\sigma }_{\mathrm{LAS}}\setminus {\mathcal {D}}^{\sigma }_{\mathrm{OPT}}\) such that \(\ell \) has been dropped by LAS. This means that, with the same tuple index prefix shared by OPT and LAS, the method Check returned true when evaluated on \(\ell \), so OPT would violate the condition on \(\tau \) by enqueuing it. This leads to a contradiction. Then, \({\mathcal {D}}^{\sigma }_{\mathrm{LAS}}\setminus {\mathcal {D}}^{\sigma }_{\mathrm{OPT}}=\emptyset \), and \({d}^{\sigma }_{\mathrm{OPT}} = {d}^{\sigma }_{\mathrm{LAS}}\).

Furthermore, by construction, LAS never enqueues a tuple that violates the condition on \(\tau \), because Check would return true.

Consequently, \(\forall t\in \sigma , {\overline{Q}}^{\sigma }_{\mathrm{LAS}}(t) \le \tau \), which concludes the proof.    \(\square \)

1.3 A.3 Execution Time Estimation

In this section, we analyze the approximation made on the execution duration w(t) of each tuple t when the full knowledge assumption is removed. LAS uses two matrices, \({\mathcal {F}}\) and \({\mathcal {W}}\), to estimate the execution duration w(t) of each tuple submitted to the operator. By the Count-Min sketch algorithm (cf., Sect. 3.2) and Listing 3.1, we have that for any \(t \in [n]\) and for each row \(i\in [r]\),

$$\begin{aligned} {\mathcal {F}}[i][h_i(t)](m)&= \sum _{u=1}^n f_u 1_{\{h_i(u)=h_i(t)\}} \\&= f_t + \sum _{u=1,u\ne t}^n f_u 1_{\{h_i(u)=h_i(t)\}}. \end{aligned}$$

and

$${\mathcal {W}}[i][h_i(t)](m) = f_t w_t + \sum _{u=1,u\ne t}^n f_u w_u 1_{\{h_i(u)=h_i(t)\}}.$$

Let us denote respectively by \(w_{\min }\) and \(w_{\max }\) the minimum and the maximum execution time of the items. We have trivially

$$w_{\min } \le \frac{{\mathcal {W}}[i][h_i(t)]}{{\mathcal {F}}[i][h_i(t)]} \le w_{\max }.$$

We define \(S = \sum _{\ell =1}^n w_\ell \). We then have

Theorem 5

$$\begin{aligned}&\mathbb {E}\{{\mathcal {W}}[i][h_i(t)]/{\mathcal {F}}[i][h_i(t)]\} \\&=\frac{S-w_t}{n-1} - \frac{k(S-nw_t)}{n(n-1)} \left( 1 - \left( 1 -\frac{1}{k}\right) ^{n}\right) . \end{aligned}$$
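
The closed form is easy to check numerically. The sketch below (our own; the parameters are illustrative) compares it against a Monte-Carlo simulation under the equal-frequency assumption used in the proof, where the ratio for item t reduces to the unweighted average of the \(w_u\) hashed to t's bucket:

```python
import random

def expected_ratio(n, k, w, t):
    """Closed form of Theorem 5 (all frequencies equal)."""
    S = sum(w)
    return ((S - w[t]) / (n - 1)
            - k * (S - n * w[t]) / (n * (n - 1)) * (1 - (1 - 1 / k) ** n))

def simulate_ratio(n, k, w, t, trials, rng):
    """Average of (w_t + sum of colliding w_u) / (1 + #collisions),
    with each of the n items hashed uniformly into k buckets."""
    acc = 0.0
    for _ in range(trials):
        h = [rng.randrange(k) for _ in range(n)]
        bucket = [u for u in range(n) if h[u] == h[t]]  # always contains t
        acc += sum(w[u] for u in bucket) / len(bucket)
    return acc / trials
```

For n = 5, k = 3, w = (1, ..., 5) and t = 0, both sides agree to about 2.20; for n = 2 the closed form can even be checked exactly by hand.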

It is important to note that this result does not depend on m.

Proof

For any \(t=1,\ldots ,n\), \(\ell =0,\ldots ,n-1\) and \(A \in U_\ell (t)\), where \(U_\ell (t)\) denotes the collection of subsets of \(\{1,\ldots ,n\} \setminus \{t\}\) of size \(\ell \), we introduce the event \(B(t,\ell ,A)\) defined by

$$\begin{aligned} B(t,\ell ,&A) = \{h_i(u)=h_i(t), \; \forall u \in A \text{ and } \\&h_i(u) \ne h_i(t), \; \forall u \in \{1,\ldots ,n\} \setminus (A \cup \{t\})\}. \end{aligned}$$

From the independence of the hash function \(h_i\), we have

$$\mathbb {P}\{B(t,\ell ,A)\} = \left( \frac{1}{k}\right) ^\ell \left( 1 -\frac{1}{k}\right) ^{n-1-\ell }.$$

Let us consider the ratio

$${\mathcal {V}}_{i,t}={\mathcal {W}}[i][h_i(t)]/{\mathcal {F}}[i][h_i(t)].$$

For any \(\ell =0,\ldots ,n-1\), we define

$$R_\ell (t) = \left\{ \frac{f_tw_t + \sum _{u \in A} f_uw_u}{f_t + \sum _{u \in A} f_u}, \; A \in U_\ell (t)\right\} .$$

We have \(R_0(t) = \{w_t\}\). We introduce the set R(t) defined by

$$R(t) = \bigcup _{\ell =0}^{n-1} R_\ell (t).$$

Thus with probability 1,

$${\mathcal {W}}[i][h_i(t)]/{\mathcal {F}}[i][h_i(t)] \in R(t).$$

Let \(x \in R(t)\). We have

$$\begin{aligned}&\mathbb {P}\{{\mathcal {V}}_{i,t} = x\} \\&= \sum _{\ell =0}^{n-1} \sum _{A \in U_\ell (t)} \mathbb {P}\{{\mathcal {V}}_{i,t} = x \mid B(t,\ell ,A)\} \mathbb {P}\{B(t,\ell ,A)\} \\&= \sum _{\ell =0}^{n-1} \left( \frac{1}{k}\right) ^\ell \left( 1 -\frac{1}{k}\right) ^{n-1-\ell } \sum _{A \in U_\ell (t)} 1_{\{x =X(t,A)\}}. \end{aligned}$$

where \(X(t,A)\) denotes the fraction

$$X(t,A) = \frac{f_tw_t + \sum _{u \in A} f_uw_u}{f_t + \sum _{u \in A} f_u}$$

Thus

$$\begin{aligned}&\mathbb {E}\{{\mathcal {V}}_{i,t}\}\\&= \sum _{\ell =0}^{n-1} \left( \frac{1}{k}\right) ^\ell \left( 1 -\frac{1}{k}\right) ^{n-1-\ell } \sum _{A \in U_\ell (t)} \sum _{x \in R(t)} x1_{\{x = X(t,A)\}} \\&= \sum _{\ell =0}^{n-1} \left( \frac{1}{k}\right) ^\ell \left( 1 -\frac{1}{k}\right) ^{n-1-\ell } \sum _{A \in U_\ell (t)} X(t,A). \end{aligned}$$

Let us assume that all the \(f_u\) are equal, that is, \(f_u = m/n\) for each u. The experimental evaluation tends to show that the worst case scenario for input streams occurs when all items have the same number of occurrences in the input stream. We get

$$\begin{aligned}&\mathbb {P}\{{\mathcal {V}}_{i,t} = x\} \\&= \sum _{\ell =0}^{n-1} \left( \frac{1}{k}\right) ^\ell \left( 1 -\frac{1}{k}\right) ^{n-1-\ell } \sum _{A \in U_\ell (t)} 1_{\{x = \frac{w_t + \sum _{u \in A} w_u}{\ell + 1}\}} \end{aligned}$$

Computing the expectation of \({\mathcal {V}}_{i,t}\) from this distribution yields the claimed expression, which concludes the proof.    \(\square \)

Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

Cite this chapter

Rivetti, N., Busnel, Y., Querzoni, L. (2020). Load-Aware Shedding in Stream Processing Systems. In: Hameurlain, A., Tjoa, A.M. (eds.) Transactions on Large-Scale Data- and Knowledge-Centered Systems XLVI. Lecture Notes in Computer Science, vol. 12410. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-62386-2_5

  • DOI: https://doi.org/10.1007/978-3-662-62386-2_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-62385-5

  • Online ISBN: 978-3-662-62386-2
