
Attuning Adaptation Rules via a Rule-Specific Neural Network

  • Conference paper
  • First Online:
Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning (ISoLA 2022)

Abstract

There have been a number of approaches to employing neural networks (NNs) in self-adaptive systems; in many cases, generic NNs/deep learning are utilized for this purpose. When this approach is to be applied to improve an adaptation process initially driven by logical adaptation rules, the problem is that (1) these rules represent a significant and tested body of domain knowledge, which may be lost if they are replaced by an NN, and (2) the learning process is inherently demanding given the black-box nature and the number of weights in generic NNs to be trained. In this paper, we introduce the rule-specific Neural Network (rsNN) method that makes it possible to transform the guard of an adaptation rule into an rsNN, the composition of which is driven by the structure of the logical predicates in the guard. Our experiments confirmed that the black box effect is eliminated, the number of weights is significantly reduced, and much faster learning is achieved while the accuracy is preserved.
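To make the construction concrete, the following is a minimal sketch (in TensorFlow/Keras, which note 3 below indicates was used) of how the guard of a simple adaptation rule could be mirrored by a rule-specific network: each atomic comparison becomes a trainable soft-threshold unit, and the guard's Boolean connective fixes the wiring. The example rule "distance < 10 AND battery > 0.2", the feature names, and the product-based AND are illustrative assumptions, not the exact rsNN construction of the paper.

```python
# Hypothetical sketch: mirroring the guard "distance < 10 AND battery > 0.2"
# with a tiny network whose topology follows the predicate structure.
# Only the thresholds and sigmoid steepness are trained (4 weights in total),
# so the learned network stays interpretable with respect to the rule.
import tensorflow as tf


class SoftThreshold(tf.keras.layers.Layer):
    """Differentiable relaxation of the predicate 'feature > threshold'."""

    def __init__(self, init_threshold, **kwargs):
        super().__init__(**kwargs)
        self.init_threshold = init_threshold

    def build(self, input_shape):
        self.threshold = self.add_weight(
            name="threshold", shape=(),
            initializer=tf.keras.initializers.Constant(self.init_threshold))
        self.steepness = self.add_weight(
            name="steepness", shape=(),
            initializer=tf.keras.initializers.Constant(1.0))

    def call(self, x):
        return tf.sigmoid(self.steepness * (x - self.threshold))


distance = tf.keras.Input(shape=(1,), name="distance")
battery = tf.keras.Input(shape=(1,), name="battery")

# 'distance < 10' expressed as the complement of 'distance > 10'.
near = tf.keras.layers.Lambda(lambda t: 1.0 - t, name="not")(
    SoftThreshold(10.0, name="distance_gt_10")(distance))
charged = SoftThreshold(0.2, name="battery_gt_0_2")(battery)

# Logical AND modelled by a product t-norm; the guard's structure, not a
# generic dense topology, dictates the wiring of the network.
guard = tf.keras.layers.Multiply(name="and")([near, charged])

model = tf.keras.Model(inputs=[distance, battery], outputs=guard)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # 4 trainable parameters
```

Training such a network against logged adaptation decisions would then attune the thresholds of the existing rule rather than relearn it from scratch, which is consistent with the abstract's claims of fewer weights, no black-box effect, and faster learning.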


Notes

  1. http://trust40.ipd.kit.edu/home/.

  2. https://github.com/smartarch/trust4.0-demo.

  3. https://www.tensorflow.org/ (version 2.4).

  4. We split the data only into a training and a testing set (the testing set holds 10% of the data). We do not need a validation set since we do not perform any hyper-parameter tuning; a minimal illustration of such a split follows this list.
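The sketch below illustrates the 90/10 split described in note 4; the use of scikit-learn's train_test_split, the synthetic data, and the variable names are assumptions for illustration only.

```python
# Illustrative 90/10 train/test split as described in note 4; scikit-learn
# and the stand-in data are assumptions, not taken from the paper.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 2))          # stand-in feature matrix
y = (X[:, 0] < 1.0) & (X[:, 1] > 0.2)   # stand-in guard labels

# 10% held out for testing; no validation set is needed because no
# hyper-parameter tuning is performed.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=42)
```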


Acknowledgment

This work has been funded by the DFG (German Research Foundation), project number 432576552, HE8596/1-1 (FluidTrust); supported by the Czech Science Foundation project 20-24814J; partially supported by Charles University institutional funding SVV 260588 and by the KASTEL institutional funding; and partially supported by the Charles University Grant Agency project 408622.

Author information


Corresponding author

Correspondence to Petr Hnětynka.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bureš, T. et al. (2022). Attuning Adaptation Rules via a Rule-Specific Neural Network. In: Margaria, T., Steffen, B. (eds) Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning. ISoLA 2022. Lecture Notes in Computer Science, vol 13703. Springer, Cham. https://doi.org/10.1007/978-3-031-19759-8_14

  • DOI: https://doi.org/10.1007/978-3-031-19759-8_14

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19758-1

  • Online ISBN: 978-3-031-19759-8

  • eBook Packages: Computer Science, Computer Science (R0)
