A core calculus for dynamic delta-oriented programming

Abstract

Delta-oriented programming (DOP) is a flexible approach to the implementation of software product lines (SPLs). Delta-oriented SPLs consist of a code base (a set of delta modules encapsulating changes to object-oriented programs) and a product line declaration (providing the connection of the delta modules with the product features). In this paper, we present a core calculus that extends DOP with the capability to switch the implemented product configuration at runtime. A dynamic delta-oriented SPL is a delta-oriented SPL with a dynamic reconfiguration graph that specifies how to switch between different feature configurations. Dynamic DOP also supports (unanticipated) software evolution: at runtime, the product line declaration, the code base and the dynamic reconfiguration graph can be changed in any way that preserves the currently running product, which is essential when evolution affects existing features. The type system of our dynamic DOP core calculus ensures that dynamic reconfigurations lead to type-safe products and do not cause runtime type errors.
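To fix intuition, the following self-contained Java sketch models the three ingredients just mentioned as plain data: a product line declaration attaching delta modules to features, the set of valid feature configurations, and a dynamic reconfiguration graph stating which configuration switches are allowed at runtime. All names are hypothetical and the representation is deliberately simplified; it illustrates the architecture, not the calculus developed in the paper.

import java.util.Map;
import java.util.Set;

// Hypothetical, simplified model of a dynamic delta-oriented SPL:
// deltasByFeature is the product line declaration (which delta modules realize
// which feature), validConfigurations are the configurations admitted by the
// feature model, and reconfGraph is the dynamic reconfiguration graph.
record DynamicSpl(Map<String, Set<String>> deltasByFeature,
                  Set<Set<String>> validConfigurations,
                  Map<Set<String>, Set<Set<String>>> reconfGraph) {

    // A switch is allowed only towards a valid configuration that is reachable
    // from the current one in the reconfiguration graph.
    boolean canSwitch(Set<String> from, Set<String> to) {
        return validConfigurations.contains(to)
            && reconfGraph.getOrDefault(from, Set.of()).contains(to);
    }
}

class DynamicSplDemo {
    public static void main(String[] args) {
        Set<String> bank = Set.of("Bank");
        Set<String> flexLog = Set.of("Bank", "FlexDelete", "Log");

        DynamicSpl spl = new DynamicSpl(
            Map.of("Bank", Set.of("DBank"),
                   "FlexDelete", Set.of("DFlexDelete"),
                   "Log", Set.of("DLog")),
            Set.of(bank, flexLog),
            Map.of(bank, Set.of(flexLog), flexLog, Set.of(bank)));

        System.out.println(spl.canSwitch(bank, flexLog));              // true
        System.out.println(spl.canSwitch(bank, Set.of("FlexDelete"))); // false: not a valid configuration
    }
}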

Notes

  1. It is generally not possible to enumerate all possible configurations, due to their sheer number in any non-trivial SPL. However, the valid configurations of an SPL can be represented in other, more compact ways (see, e.g., [8]); an illustrative encoding is sketched after these notes.

  2. The formalization performs only the updates that are strictly necessary. An implementation may choose to follow a more eager strategy.

  3. This implies that the running program conforms to configuration FlexDeleteLogBank.

  4. This implies that the running program conforms to configuration FlexDeleteCorporateBank.

  5. The calculus abstracts from the concrete representation of the feature model.

  6. In a full-fledged language, accessing \(\texttt {null}\) would raise an exception, for which the reconfiguration operation should provide an appropriate handler.

  7. The first enabling condition ensures that \(\textit{meth}_{\overline{\psi }'}(\texttt {main},\texttt {Main})=\textit{meth}_{\overline{\psi }}(\texttt {main}, \texttt {Main})\).
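As an illustration of the symbolic representations mentioned in note 1 (the feature names below are hypothetical, and the encoding is the standard propositional reading of feature models described in [8], not a construction of this paper), a feature model with a mandatory feature \(\texttt {Base}\), optional features \(\texttt {Overdraft}\) and \(\texttt {Logging}\), and a cross-tree constraint stating that \(\texttt {Logging}\) requires \(\texttt {Overdraft}\) can be represented by the propositional formula

$$\begin{aligned} \varphi \;=\; \texttt {Base}\;\wedge \;(\texttt {Overdraft}\rightarrow \texttt {Base})\;\wedge \;(\texttt {Logging}\rightarrow \texttt {Base})\;\wedge \;(\texttt {Logging}\rightarrow \texttt {Overdraft}) \end{aligned}$$

whose satisfying assignments are exactly the valid feature configurations, so that the validity of a configuration can be checked without enumerating all configurations.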

References

  1. Aldrich, J., Sunshine, J., Saini, D., Sparks, Z.: Typestate-oriented programming. In: Proceedings of the 24th ACM SIGPLAN conference companion on object oriented programming systems languages and applications, OOPSLA ’09, pp. 1015–1022. ACM, New York (2009). doi:10.1145/1639950.1640073

  2. Alves, V., Matos, P. Jr., Cole, L., Vasconcelos, A., Borba, P., Ramalho, G.: Extracting and evolving code in product lines with aspect-oriented programming. In: Transactions on aspect-oriented software development IV, pp. 117–142. Springer, Berlin (2007)

  3. Ancona, D., Anderson, C., Damiani, F., Drossopoulou, S., Giannini, P., Zucca, E.: A provenly correct translation of fickle into java. ACM Trans. Program. Lang. Syst. 29(2), (2007). doi:10.1145/1216374.1216381

  4. Andrade, R., Rebêlo, H., Ribeiro, M., Borba, P.: Flexible feature binding with aspectj-based idioms. J. UCS 20(5), 692–719 (2014)

  5. Apel, S., Kästner, C., Größlinger, A., Lengauer, C.: Type safety for feature-oriented product lines. Autom. Softw. Eng. 17(3), 251–300 (2010). doi:10.1007/s10515-010-0066-8

  6. Apel, S., Leich, T., Rosenmüller, M., Saake, G.: FeatureC++: on the symbiosis of feature-oriented and aspect-oriented programming. In: Glück, R., Lowry, M. (eds.) Generative Programming and Component Engineering. Lecture Notes in Computer Science, vol. 3676, pp. 125–140. Springer, Berlin (2005). doi:10.1007/11561347_10

  7. Aracic, I., Gasiunas, V., Mezini, M., Ostermann, K.: An overview of caesarj. In: Rashid, A., Aksit, M. (eds.) Transactions on Aspect-Oriented Software Development I. Lecture Notes in Computer Science, vol. 3880, pp. 135–173. Springer, Berlin (2006). doi:10.1007/11687061_5

  8. Batory, D.: Feature models, grammars, and propositional formulas. In: Obbink, H., Pohl, K. (eds.) Software Product Lines. Lecture Notes in Computer Science, vol. 3714, pp. 7–20. Springer, Berlin (2005). doi:10.1007/11554844_3

  9. Batory, D., Sarvela, J.N., Rauschmayer, A.: Scaling step-wise refinement. IEEE Trans. Softw. Eng. 30(6), 355–371 (2004). doi:10.1109/TSE.2004.23

  10. Bettini, L., Bono, V.: Type safe dynamic object delegation in class-based languages. In: Proceedings of the 6th international symposium on principles and practice of programming in Java, PPPJ ’08, pp. 171–180. ACM, New York (2008). doi:10.1145/1411732.1411756

  11. Bettini, L., Bono, V., Venneri, B.: Delegation by object composition. Sci. Comput. Program. 76(11):992–1014 (2011). doi:10.1016/j.scico.2010.04.006. http://www.sciencedirect.com/science/article/pii/S0167642310000754

  12. Bettini, L., Capecchi, S., Damiani, F.: On flexible dynamic trait replacement for java-like languages. Sci. Comput. Program., 78(7):907–932 (2013a). doi:10.1016/j.scico.2012.11.003. http://www.sciencedirect.com/science/article/pii/S0167642312002092

  13. Bettini, L., Damiani, F., Schaefer, I.: Compositional type checking of delta-oriented software product lines. Acta Informatica 50, 77–122 (2013b). doi:10.1007/s00236-012-0173-z

  14. Bettini, L., Damiani, F., Schaefer, I.: Implementing type-safe software product lines using parametric traits. Sci. Comput. Program. (2013c). doi:10.1016/j.scico.2013.07.016. http://www.sciencedirect.com/science/article/pii/S0167642313001901

  15. Bubel, R., Damiani, F., Hähnle, R., Johnsen, E.B., Owe, O., Schaefer, I., Yu, I.C.: Proof repositories for compositional verification of evolving software systems, pp. 130–156. Springer International Publishing, Cham (2016). doi:10.1007/978-3-319-46508-1_8

  16. Capilla, R., Bosch, J., Trinidad, P., Ruiz-Cortés, A., Hinchey, M.: An overview of dynamic software product line architectures and techniques: observations from research and industry. J. Syst. Softw. 91:3–23 (2014). doi:10.1016/j.jss.2013.12.038. http://www.sciencedirect.com/science/article/pii/S0164121214000119

  17. Previtali, S.C., Gross, T.R.: Aspect-based dynamic software updating: a model and its empirical evaluation. In: AOSD, pp. 105–116. ACM, New York (2011). doi:10.1145/1960275.1960289

  18. Chakravarthy, V., Regehr, J., Eide, E.: Edicts: implementing features with flexible binding times. In: Proceedings of the 7th international conference on aspect-oriented software development, AOSD ’08, pp. 108–119. ACM, New York (2008). doi:10.1145/1353482.1353496

  19. Cytron, R., Ferrante, J., Rosen, B.K., Wegman, M.N., Kenneth Zadeck, F.: Efficiently computing static single assignment form and the control dependence graph. ACM Trans. Program. Lang. Syst. 13(4), 451–490 (1991). doi:10.1145/115372.115320

  20. Damiani, F., Drossopoulou, S., Giannini, P.: Refined effects for unanticipated object re-classification: Fickle\(_{3}\). In: Blundo, C., Laneve, C. (eds.) Theoretical computer science. Lecture Notes in Computer Science, vol. 2841 pp. 97–110. Springer, Berlin (2003). doi:10.1007/978-3-540-45208-9_9

  21. Damiani, F., Gladisch, C., Tyszberowicz, S.: Refinement-based testing of delta-oriented product lines. In: Proceedings of the 2013 international conference on principles and practices of programming on the java platform: virtual machines, languages, and tools, PPPJ ’13, pp. 135–140. ACM, New York (2013). doi:10.1145/2500828.2500841

  22. Damiani, F., Lienhardt, M.: On Type Checking Delta-Oriented Product Lines, pp. 47–62. Springer International Publishing, Cham (2016). doi:10.1007/978-3-319-33693-0_4

  23. Damiani, F., Owe, O., Dovland, J., Schaefer, I., Johnsen, E.B. , Yu, I.C.: A transformational proof system for delta-oriented programming. In: Proceedings of the 16th international software product line conference, vol. 2, SPLC ’12, pp. 53–60. ACM, New York (2012a). doi:10.1145/2364412.2364422

  24. Damiani, F., Padovani, L., Schaefer, I.: A formal foundation for dynamic delta-oriented software product lines. In: Proceedings of the 11th international conference on generative programming and component engineering, GPCE ’12, pp. 1–10. ACM, New York (2012b). doi:10.1145/2371401.2371403

  25. Damiani, F., Schaefer, I.: Dynamic delta-oriented programming. In: Proceedings of the 15th international software product line conference, volume 2, SPLC ’11, pp. 34:1–34:8. ACM, New York (2011). doi:10.1145/2019136.2019175

  26. Damiani, F., Schaefer, I.: Family-based analysis of type safety for delta-oriented software product lines, pp. 193–207. Springer, Berlin (2012). doi:10.1007/978-3-642-34026-0_15

  27. Dinkelaker, T., Mitschke, R., Fetzer, K., Mezini, M.: A dynamic software product line approach using aspect models at runtime. In: Fifth domain-specific aspect languages workshop, vol. 39, (2010)

  28. Drossopoulou, S., Damiani, F., Dezani-Ciancaglini, M., Giannini, P.: More dynamic object reclassification: Fickle\(_{\Vert }\). ACM Trans. Program. Lang. Syst. 24(2), 153–191 (2002). doi:10.1145/514952.514955

  29. Ducasse, S., Nierstrasz, O., Schärli, N., Wuyts, R., Black, A.P.: Traits: a mechanism for fine-grained reuse. ACM Trans. Program. Lang. Syst. 28(2), 331–388 (2006). doi:10.1145/1119479.1119483

  30. Ernst, E.: gbeta–a Language with Virtual Attributes, Block Structure, and Propagating, Dynamic Inheritance. PhD thesis, Department of Computer Science, University of Århus, Denmark (1999). http://www.daimi.au.dk/~eernst/gbeta/

  31. Groher, I., Voelter, M.: Aspect-oriented model-driven software product line engineering. In: Transactions on aspect-oriented software development VI, pp. 111–152. Springer (2009)

  32. Gu, T., Cao, C., Xu, C., Ma, X., Zhang, L., Lü, J.: Low-disruptive dynamic updating of java applications. Inf. Softw. Technol. 56(9):1086–1098 (2014). doi:10.1016/j.infsof.2014.04.003. http://www.sciencedirect.com/science/article/pii/S0950584914000846

  33. Günther, S., Sunkle, S.: Dynamically adaptable software product lines using Ruby metaprogramming. In: Proceedings of the 2nd international workshop on feature-oriented software development, FOSD ’10, pp. 80–87. ACM, New York (2010). doi:10.1145/1868688.1868700

  34. Hähnle, R., Schaefer, I.: A liskov principle for delta-oriented programming. In: Margaria, T., Steffen, B. (eds.) Leveraging applications of formal methods, verification and validation. Technologies for Mastering Change. Lecture Notes in Computer Science, vol. 7609, pp. 32–46. Springer, Berlin (2012). doi:10.1007/978-3-642-34026-0_4

  35. Hallsteinsen, S., Hinchey, M., Park, S., Schmid, K.: Dynamic software product lines. Computer 41(4), 93–95 (2008). doi:10.1109/MC.2008.123

  36. Herrmann, S.: A precise model for contextual roles: The programming language ObjectTeams/Java. Appl. Ontol. 2(2):181–207 (2007). http://iospress.metapress.com/content/M186325145U8166N

  37. Hirschfeld, R., Costanza, P., Nierstrasz, O.: Context-oriented programming. J. Object Technol. 7(3):125–151 (2008). doi:10.5381/jot.2008.7.3.a4. http://www.jot.fm/contents/issue_2008_03/article4.html

  38. Igarashi, A., Pierce, B.C., Wadler, P.: Featherweight java: A minimal core calculus for java and gj. ACM Trans. Program. Lang. Syst. 23(3), 396–450 (2001). doi:10.1145/503502.503505

  39. Johnsen, E.B., Kyas, M., Yu, I.C.: Dynamic classes: modular asynchronous evolution of distributed concurrent objects. In: Cavalcanti, A., Dams, D.R. (eds.) FM 2009: formal methods. Lecture Notes in Computer Science, vol. 5850, pp 596–611. Springer, Berlin (2009). doi:10.1007/978-3-642-05089-3_38

  40. Kästner, C., Apel, S., Batory, D.: A case study implementing features using aspectJ. In: IEEE computer society software product line conference, pp. 223–232, Los Alamitos, CA (2007). doi:10.1109/SPLINE.2007.12

  41. Kästner, C., Apel, S., Kuhlemann, M.: Granularity in software product lines. In: Proceedings of the 30th international conference on software engineering, ICSE ’08, pp. 311–320. ACM, New York (2008). doi:10.1145/1368088.1368131

  42. Kiczales, G., Hilsdale, E., Hugunin, J., Kersten, M., Palm, J., Griswold, W.G.: An overview of aspectj. In: Knudsen, J.L. (eds.) ECOOP 2001 object-oriented programming. Lecture Notes in Computer Science, vol. 2072, pp. 327–354. Springer, Berlin (2001). doi:10.1007/3-540-45337-7_18

  43. Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C., Loingtier, J.-M., Irwin, J.: Aspect-Oriented Programming. Springer, Berlin (1997)

  44. Koscielny, J., Holthusen, S., Schaefer, I., Schulze, S., Bettini, L., Damiani, F.: Deltaj 1.5: delta-oriented programming for java 1.5. In: PPPJ 2014, pp. 63–74 (2014)

  45. Krueger, C.: Eliminating the adoption barrier. IEEE Softw. 19(4), 29–31 (2002). doi:10.1109/MS.2002.1020284

  46. Lochau, M., Schaefer, I., Kamischke, J., Lity, S.: Incremental model-based testing of delta-oriented software product lines. In: Brucker, A.D., Julliand, J. (eds.) Tests and proofs. Lecture Notes in Computer Science, vol. 7305, pp. 67–82. Springer, Berlin (2012). doi:10.1007/978-3-642-30473-6_7

  47. Lopez-Herrejon, R.E., Batory, D., Cook, W.: Evaluating support for features in advanced modularization technologies. In: Black, A.P. (ed) ECOOP 2005–object-oriented programming. Lecture Notes in Computer Science, vol. 3586, pp. 169–194. Springer, Berlin (2005). doi:10.1007/11531142_8

  48. Pohl, K., Böckle, G., van der Linden, F.J.: Software Product Line Engineering–Foundations, Principles and Techniques. Springer, Berlin (2005)

  49. Pukall, M., Kästner, C., Cazzola, W., Götz, S., Grebhahn, A., Schöter, R., Saake, G.: JavAdaptor–flexible runtime updates of java applications. Softw. Pract. Exp. 43(2), 153–185 (2013). doi:10.1002/spe.2107

  50. Ribeiro, M., Cardoso, R., Borba, P., Bonifácio, R., Rebêlo, H.: Does aspectj provide modularity when implementing features with flexible binding times? Third Latin American Workshop on Aspect-Oriented Software Development (LA-WASP 2009), Fortaleza, Ceara, Brazil, pp. 1–6 (2009)

  51. Rosenmüller, M., Siegmund, N., Apel, S., Saake, G.: Flexible feature binding in software product lines. Autom. Softw. Eng. 18(2), 163–197 (2011a). doi:10.1007/s10515-011-0080-5

  52. Rosenmüller, M., Siegmund, N., Pukall, M., Apel, S.: Tailoring dynamic software product lines. SIGPLAN Not. 47(3), 3–12 (2011b). doi:10.1145/2189751.2047866

  53. Saini, D., Sunshine, J., Aldrich, J.: A theory of typestate-oriented programming. In: FTfJP, pp. 9:1–9:7. ACM (2010). doi:10.1145/1924520.1924529

  54. Schaefer, I., Bettini, L., Bono, V., Damiani, F., Tanzarella, N.: Delta-oriented programming of software product lines. In: Bosch, J., Lee, J. (eds.) Software product lines: going beyond. Lecture Notes in Computer Science, vol. 6287, pp. 77–91. Springer, Berlin (2010). doi:10.1007/978-3-642-15579-6_6

  55. Schaefer, I., Damiani, F.: Pure delta-oriented programming. In: Proceedings of the 2nd international workshop on feature-oriented software development, FOSD ’10, pp. 49–56. ACM, New York (2010). doi:10.1145/1868688.1868696

  56. Schaefer, I., Rabiser, R., Clarke, D., Bettini, L., Benavides, D., Botterweck, G., Pathak, A., Trujillo, S., Villela, K.: Software diversity: state of the art and perspectives. STTT 14(5), 477–495 (2012). doi:10.1007/s10009-012-0253-y

  57. Seidl, C., Schaefer, I., Aßmann, U.: Integrated management of variability in space and time in software families. In: Proceedings of the 18th international software product line conference (SPLC), SPLC’14 (2014)

  58. Smaragdakis, Y., Batory, D.: Mixin layers: An object-oriented implementation technique for refinements and collaboration-based designs. ACM Trans. Softw. Eng. Methodol. 11(2), 215–255 (2002). doi:10.1145/505145.505148

  59. Smith, C., Drossopoulou, S.: Chai: Traits for java-like languages. In: Black, A.P. (ed.) ECOOP 2005–object-oriented programming. Lecture Notes in Computer Science, vol. 3586, pp. 453–478. Springer, Berlin (2005). doi:10.1007/11531142_20

  60. Tarr, P., Ossher, H., Harrison, W., Sutton, S.M. Jr.: N degrees of separation: Multi-dimensional separation of concerns. In: Proceedings of the 21st International conference on software engineering, ICSE ’99, pp. 107–119. ACM, New York (1999). doi:10.1145/302405.302457

  61. Würthinger, T., Wimmer, C., Stadler, L.: Unrestricted and safe dynamic code evolution for java. Sci. Comput. Program. 78(5):481–498 (2013). doi:10.1016/j.scico.2011.06.005. http://www.sciencedirect.com/science/article/pii/S0167642311001456

Author information

Correspondence to Ferruccio Damiani.

Additional information

This work has been partially supported by: project HyVar (www.hyvar-project.eu), which has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 644298; by ICT COST Action IC1402 ARVI (www.cost-arvi.eu); by ICT COST Action IC1201 BETTY (http://www.behavioural-types.eu/); and by Ateneo/CSP D16D15000360005 project RunVar.

Appendix: Proof of Theorem 1 (Type Soundness)

In order to formulate the type soundness of IFD\(\Delta \)J by means of a subject-reduction theorem and a progress theorem for the small-step semantics, we need a type system for runtime expressions. Expressions containing a stupid selection, i.e., a field selection \(\texttt {null}.\texttt {f}\) or a method invocation \(\texttt {null}.\texttt {m}(\cdot \cdot \cdot )\), are not well typed according to the \(\textsc {IFJ}\) source level type system (cf. Fig. 10). However, a runtime expression without stupid selections may reduce to a runtime expression containing a stupid selection. The type system for runtime expressions therefore contains a rule for assigning any type \(\texttt {T}\) to the value \(\texttt {null}\), so that stupid selections can be typed.
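To make the role of this rule concrete, the following two rule schemas sketch the intended shapes (the stack component of the runtime judgement is omitted here, and the schemas are an illustrative rendering rather than a verbatim copy of the rules in Fig. 16): the first gives \(\texttt {null}\) any type, and the second then types the stupid selection \(\texttt {null}.\texttt {f}\) like an ordinary field selection.

$$\begin{aligned} \frac{}{\varTheta \vdash _{\overline{\psi }} \texttt {null}: \texttt {T}} \qquad \qquad \frac{\varTheta \vdash _{\overline{\psi }} \texttt {null}: \texttt {C}\quad \texttt {T}\,\texttt {f}\in \textit{fields}_{\overline{\psi }}(\texttt {C})}{\varTheta \vdash _{\overline{\psi }} \texttt {null}.\texttt {f}: \texttt {T}} \end{aligned}$$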

Fig. 15  IFD\(\Delta \)J: Rules for well-formed heap environments

A heap (type) assumption \(\varSigma \) is a mapping from addresses to class names. The empty heap assumption is denoted by \({\bullet }\). A lazy-heap assumption \(\varTheta \) is either a heap assumption \(\varSigma \) or a partially reconfigured heap assumption of the form

$$\begin{aligned} \varSigma _n :\texttt {R}_n( \varSigma _{n-1} :\texttt {R}_{n-1}( \cdots \varSigma _1 :\texttt {R}_{1}( \varSigma _0)\cdots )) \end{aligned}$$

The head domain of \(\varTheta \) is \(\textit{head-dom}(\varTheta )=\textit{dom}(\varSigma _n)\), the tail domain of \(\varTheta \) is \(\textit{tail-dom}(\varTheta )=\bigcup _{i\in 0..n-1}\textit{dom}(\varSigma _i)\), and the full domain of \(\varTheta \) is \(\textit{full-dom}(\varTheta )=\textit{head-dom}(\varTheta )\cup \textit{tail-dom}(\varTheta )\). We say that a lazy-heap assumption is well formed w.r.t. the feature configuration \(\overline{\psi }\) if the judgement \(\Vdash _{\overline{\psi }} \varTheta \) can be derived by the rules in Fig. 15. All the lazy-heap assumptions mentioned in the rest of this paper are well formed w.r.t. a feature configuration that is either explicitly mentioned or understood from the context. According to rule (WF-HeapEnv), a heap assumption \(\varSigma \) is well formed in the configuration \(\overline{\psi }\) if every class mentioned in \(\varSigma \) is defined in \(\overline{\psi }\). A lazy-heap assumption \(\varSigma : \texttt {R}(\varTheta )\) is well formed in the configuration \(\overline{\psi }\) if \(\varSigma \) is well formed in \(\overline{\psi }\) and \(\varTheta \) is well formed in the source configuration \(\overline{\psi }'\) of \(\texttt {R}\) (i.e., \(\texttt {R}= \overline{\psi }' \Rightarrow \overline{\psi } \{ {\cdots } \}\)). Additionally, all addresses in \(\textit{dom}(\varSigma )\) that are not in the domain of the topmost heap assumption in \(\varTheta \) must refer to objects created after any other object in \(\varTheta \), and the correspondence between the class of an object whose address is in \(\varSigma \) and the class of the same object in \(\varTheta \) is given by \(\texttt {R}\).
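As a small worked example (the addresses \(\iota _1,\iota _2,\iota _3\) and class names below are hypothetical), consider a partially reconfigured heap assumption with a single pending reconfiguration clause \(\texttt {R}\):

$$\begin{aligned} \varTheta = \varSigma _1 :\texttt {R}(\varSigma _0) \qquad \text {where}\quad \varSigma _0 = \{\iota _1: \texttt {C},\;\iota _2: \texttt {D}\} \quad \text {and}\quad \varSigma _1 = \{\iota _1: \texttt {R}(\texttt {C}),\;\iota _3: \texttt {E}\} \end{aligned}$$

Here \(\iota _1\) has already been reconfigured, \(\iota _2\) has not been reached yet, and \(\iota _3\) refers to an object created after the reconfiguration step; accordingly, \(\textit{head-dom}(\varTheta )=\{\iota _1,\iota _3\}\), \(\textit{tail-dom}(\varTheta )=\{\iota _1,\iota _2\}\) and \(\textit{full-dom}(\varTheta )=\{\iota _1,\iota _2,\iota _3\}\).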

As reductions may create and reconfigure objects, (lazy) heap assumptions need to be updated accordingly. To this end, we define the relation \(\Supset \) between lazy-heap assumptions inductively as follows:

$$\begin{aligned} \frac{\varSigma \supseteq \varSigma ' }{\varSigma \Supset \varSigma '} \qquad \frac{ \varSigma \supseteq \varSigma '\quad \varTheta \Supset \varTheta ' }{\varSigma : \texttt {R}(\varTheta ) \Supset \varSigma ' : \texttt {R}(\varTheta ')} \end{aligned}$$
Fig. 16  Typing rules for runtime expressions, lazy heaps and states for Imperative Featherweight Dynamic Delta Java (IFD\(\Delta \)J)

Evaluation contexts, which reflect the congruence rules (see Fig. 13), are defined as follows:

$$\begin{aligned} \begin{array}{@{}r@{~}c@{~}l@{}} E&\mathbf{{:}{:}=}&[\,] \;\big \vert \;E.\texttt {f}\;\big \vert \;E.\texttt {m}(\overline{\texttt {e}}) \;\big \vert \;\texttt {v}.\texttt {m}(\overline{\texttt {v}},E,\overline{\texttt {e}}) \;\big \vert \;E.\texttt {f}=\texttt {e}\;\big \vert \;\texttt {v}.\texttt {f}=E\;\big \vert \;\texttt {return}(E) \end{array} \end{aligned}$$
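For instance (with a hypothetical address \(\iota \)), the runtime expression \(\iota .\texttt {f}.\texttt {m}(\texttt {null})\) decomposes as \(E[\iota .\texttt {f}]\) with the evaluation context \(E=[\,].\texttt {m}(\texttt {null})\), so the congruence rules select the field access \(\iota .\texttt {f}\) as the next redex to be reduced.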

A runtime expression \(\textit{e}\) is well formed w.r.t. a stack \(\overline{\iota }=\iota _1\iota _2\cdots \iota _n\,(n\ge 0)\), written \(\mathbf {wf}(\overline{\iota },\textit{e})\), if \(\textit{e}\) is of the form

$$\begin{aligned} E_1[ \texttt {return}(E_2[ \texttt {return}(\cdots E_n[\texttt {return}(\textit{e})]\cdots )]) ] \end{aligned}$$

where \(E_1\), ..., \(E_n, \textit{e}\) do not contain occurrences of \(\texttt {return}\).
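As a concrete illustration (with hypothetical addresses), for the stack \(\overline{\iota }=\iota _1\,\iota _2\) the runtime expression

$$\begin{aligned} \textit{e} = \iota _1.\texttt {f}= \texttt {return}(\iota _2.\texttt {m}(\texttt {return}(\iota _3.\texttt {f}))) \end{aligned}$$

is well formed, i.e., \(\mathbf {wf}(\iota _1\iota _2,\textit{e})\) holds: indeed \(\textit{e}=E_1[\texttt {return}(E_2[\texttt {return}(\iota _3.\texttt {f})])]\) with \(E_1 = \iota _1.\texttt {f}= [\,]\) and \(E_2 = \iota _2.\texttt {m}([\,])\), and neither \(E_1\), \(E_2\) nor \(\iota _3.\texttt {f}\) contains \(\texttt {return}\). Intuitively, each \(\texttt {return}\) marks a pending method invocation, so the number of \(\texttt {return}\) occurrences matches the length of the stack.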

Typing rules for runtime expressions are shown in Fig. 16; these rules have the shape \(\varTheta \vdash _{\overline{\psi }}\overline{\iota },\,\textit{e}: \texttt {T}\). In Fig. 16 we also present the notions of well-formed lazy heap and of well-formed state. The notion of well-formed lazy heap ensures that the environment \(\varTheta \) maps every address in the lazy heap to the type of the corresponding object and that, for every object stored in the lazy heap, the fields of the object contain appropriately typed values.

The next lemma states that the \(\textit{olookup}\) function returns an object of the expected type and a new lazy heap that is well typed whenever the original lazy heap is. The statement of the lemma actually involves several of the auxiliary functions of Fig. 14: since these functions are mutually recursive, their correctness must be proved collectively. Well-foundedness of these functions (and of the proof of the lemma) is guaranteed by the fact that a lazy heap consists of a finite number of reconfigurations.
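To convey the operational intuition behind \(\textit{olookup}\), the following self-contained Java sketch mimics the lazy strategy described here and in the proof of Lemma 1: an address already resolved in the topmost heap is returned directly, while an address still belonging to an older heap is looked up recursively, reconfigured on demand, and promoted to the topmost heap. All names are hypothetical and the object and reconfiguration model is deliberately simplified (in particular, the extra heap of objects freshly allocated by a reconfiguration clause is omitted); it is a sketch of the idea, not the calculus formalization.

import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// A lazy heap is either a plain heap (below == null) or a heap stacked by a
// reconfiguration step on top of an older, not yet fully reconfigured lazy heap.
final class LazyHeap<O> {
    final Map<Integer, O> top = new HashMap<>(); // topmost heap
    final UnaryOperator<O> reconf;               // object reconfiguration (unused at depth 0)
    final LazyHeap<O> below;                     // older lazy heap (null at depth 0)

    LazyHeap() { this(null, null); }                       // depth 0: a plain heap
    LazyHeap(UnaryOperator<O> reconf, LazyHeap<O> below) { // depth n + 1
        this.reconf = reconf;
        this.below = below;
    }

    // Lazy lookup: an object reached below the topmost heap is reconfigured on
    // demand and cached at the top, so later lookups find it immediately.
    O olookup(int addr) {
        O o = top.get(addr);
        if (o == null && below != null) {
            O older = below.olookup(addr);  // resolve in the older lazy heap
            if (older != null) {
                o = reconf.apply(older);    // reconfigure on demand
                top.put(addr, o);           // promote to the topmost heap
            }
        }
        return o;                           // null models a dangling address
    }
}

class LazyHeapDemo {
    public static void main(String[] args) {
        LazyHeap<String> old = new LazyHeap<>();
        old.top.put(1, "Account(balance=10)");
        // Switching configuration stacks an empty heap plus a reconfiguration function.
        LazyHeap<String> current = new LazyHeap<>(o -> o.replace(")", ", limit=0)"), old);
        System.out.println(current.olookup(1)); // Account(balance=10, limit=0)
        System.out.println(current.top);        // the reconfigured object is now cached at the top
    }
}

The eager strategy mentioned in note 2 would instead traverse and reconfigure the whole older heap at reconfiguration time; the lazy variant sketched above spreads that cost over subsequent lookups.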

Lemma 1

Let (*) \(\varTheta \Vdash _{\overline{\psi }} {\mathscr {L}}\) and suppose \(\vdash \texttt {R}~\mathbf {ok}\) for all clauses \(\texttt {R}\) occurring in \({\mathscr {L}}\). Then:

  1. if \(\textit{olookup}(\iota ,{\mathscr {L}}) = \mathtt {o},{\mathscr {L}}'\), then there exists \(\varTheta ' \Supset \varTheta \) such that \(\varTheta ' \Vdash _{\overline{\psi }} {\mathscr {L}}'\) and \({\mathscr {L}}'(\iota ) = \mathtt {o}\);

  2. if \(\textit{oreconf}(\iota ,\texttt {R},{\mathscr {L}}) = \mathtt {o},\mathscr {H},{\mathscr {L}}'\), then there exist \(\varSigma \) and \(\varTheta ' \Supset \varTheta \) such that \(\{ \iota : \texttt {R}(\textit{clookup}(\iota ,\varTheta )) \},\varSigma :\texttt {R}(\varTheta ') \Vdash _{\overline{\psi }} \{\iota \mapsto \mathtt {o}\},\mathscr {H}:\texttt {R}({\mathscr {L}}')\) where \(\texttt {R}= \overline{\psi }' \Rightarrow \overline{\psi } \{ {\cdots } \}\);

  3. if \(\overline{\psi };\textit{clookup}(\iota ,\varTheta ) \vdash \texttt {A}\,\texttt {y}=\texttt {p} \; {\mathbf {ok}}\) and \(\textit{preeval}(\texttt {p}[\iota /\texttt {this}],{\mathscr {L}}) = \texttt {v}, {\mathscr {L}}'\), then there exists \(\varTheta ' \Supset \varTheta \) such that \(\varTheta ' \vdash _{\overline{\psi }} \texttt {v}: \texttt {D}\) and \(\texttt {D}<:_{\overline{\psi }} \texttt {A}\);

  4. if \(\overline{\psi }; \textit{clookup}(\iota ,\varTheta ) \vdash \texttt {p}: \texttt {D}\) and \(\textit{preeval}(\texttt {p}[\iota /\texttt {this}],{\mathscr {L}}) = \texttt {v}, {\mathscr {L}}'\), then there exists \(\varTheta ' \Supset \varTheta \) such that \(\varTheta ' \Vdash _{\overline{\psi }} {\mathscr {L}}'\) and \(\varTheta ' \vdash _{\overline{\psi }} \texttt {v}: \texttt {D}'\) and \(\texttt {D}' <:_{\overline{\psi }} \texttt {D}\).

Proof

We prove all items simultaneously, and we proceed by induction on the depth of \({\mathscr {L}}\), where the depth of a lazy heap is the number of reconfigurations that occur in it (a heap \(\mathscr {H}\) has depth 0):

  1. In the base case it must be \({\mathscr {L}}= \mathscr {H}\) for some \(\mathscr {H}\) and \(\mathtt {o} = \mathscr {H}(\iota )\). We conclude immediately by taking \(\varTheta ' = \mathscr {H}\).

    In the inductive case we have \({\mathscr {L}}= \mathscr {H}:\texttt {R}({\mathscr {L}}_1)\) where \(\texttt {R}= \overline{\psi }' \Rightarrow \overline{\psi } \{ {\cdots } \}\) and we distinguish two subcases. If \(\iota \in \textit{dom}(\mathscr {H})\), then \(\mathtt {o} = \mathscr {H}(\iota )\) and we conclude immediately by taking \(\varTheta ' = \varTheta \). Suppose \(\iota \not \in \textit{dom}(\mathscr {H})\). Then (O1) \(\textit{olookup}(\iota ,{\mathscr {L}}_1) = \_,{\mathscr {L}}_1'\) and (O2) \(\textit{oreconf}(\iota ,\texttt {R},{\mathscr {L}}_1') = \mathtt {o},\mathscr {H}',{\mathscr {L}}_1''\) and \({\mathscr {L}}' = \mathscr {H}\cup \{ \iota \mapsto \mathtt {o} \} \cup \mathscr {H}' : \texttt {R}({\mathscr {L}}_1'')\). From (*) we deduce \(\varTheta = \varSigma : \texttt {R}(\varTheta _1)\) for some \(\varSigma \) and \(\varTheta _1\) such that \(\varTheta _1 \Vdash _{\overline{\psi }'} {\mathscr {L}}_1\). From (O1) and the induction hypothesis we deduce that there exists \(\varTheta _1' \Supset \varTheta _1\) such that \(\varTheta _1' \Vdash _{\overline{\psi }'} {\mathscr {L}}_1'\). From (O2) and item (2) we deduce that there exist \(\varSigma '\) and \(\varTheta _1'' \Supset \varTheta _1'\) such that \(\{ \iota : \texttt {R}(\textit{clookup}(\iota ,\varTheta )) \},\varSigma ':\texttt {R}(\varTheta _1'') \Vdash _{\overline{\psi }} \{\iota \mapsto \mathtt {o}\},\mathscr {H}:\texttt {R}({\mathscr {L}}_1'')\). We conclude by taking \(\varTheta ' = \varSigma \cup \{ \iota : \texttt {R}(\textit{clookup}(\iota ,\varTheta )) \} \cup \varSigma ' : \texttt {R}(\varTheta _1'')\).

  2. Follows from item (3) and Lemma 3.

  3. From (T-PreAssign) we deduce \(\overline{\psi }; \textit{clookup}(\iota ,\varTheta ) \vdash \texttt {p}: \texttt {A}'\) and \(\texttt {A}' <:_{\overline{\psi }} \texttt {A}\). By item (4) we deduce that there exists \(\varTheta ' \Supset \varTheta \) such that \(\varTheta ' \vdash _{\overline{\psi }} \texttt {v}: \texttt {D}\) with \(\texttt {D}<:_{\overline{\psi }} \texttt {A}'\). We conclude by transitivity of \(<:_{\overline{\psi }}\).

  4. We proceed by induction on the structure of \(\texttt {p}\) and by cases on the rule for \(\textit{preeval}\) applied:

    • (\(\texttt {p}= \texttt {this}\)) Then \(\texttt {v}= \iota \) and we conclude by taking \(\varTheta ' = \varTheta \) and \(\texttt {D}= \texttt {D}' = \textit{clookup}(\iota ,\varTheta )\).

    • (\(\texttt {p}= \texttt {p}'.\texttt {f}\) and \(\textit{preeval}(\texttt {p}'[\iota /\texttt {this}],{\mathscr {L}}) = \texttt {null}, {\mathscr {L}}'\)). Then \(\texttt {v}= \texttt {null}\). By induction hypothesis there exists \(\varTheta ' \Supset \varTheta \) such that \(\varTheta ' \Vdash _{\overline{\psi }} {\mathscr {L}}'\). We conclude by taking \(\texttt {D}' = \texttt {D}\).

    • (\(\texttt {p}= \texttt {p}'.\texttt {f}\) and \(\textit{preeval}(\texttt {p}'[\iota /\texttt {this}],{\mathscr {L}}) = \iota ', {\mathscr {L}}''\) and \(\textit{olookup}(\iota ',{\mathscr {L}}'') = \langle \texttt {C}', \overline{\texttt {f}} = \overline{\texttt {u}}\rangle , {\mathscr {L}}'\)). From rule (T-PreExpField) we deduce \(\overline{\psi }; \textit{clookup}(\iota ,\varTheta ) \vdash \texttt {p}' : \texttt {D}_0\) and \(\texttt {D}\,\texttt {f}\in { fields}_{\overline{\psi }}(\texttt {D}_0)\). By induction hypothesis (on \(\texttt {p}'\)) there exists \(\varTheta '' \Supset \varTheta \) such that \(\varTheta '' \vdash \iota ' : \texttt {D}_0'\) and \(\texttt {D}_0' <:_{\overline{\psi }} \texttt {D}_0\), therefore \(\texttt {D}\,\texttt {f}\in { fields}_{\overline{\psi }}(\texttt {D}_0')\). By induction hypothesis (on the depth of \(\varTheta ''\)) there exists \(\varTheta ' \Supset \varTheta ''\) such that \(\varTheta ' \Vdash _{\overline{\psi }} {\mathscr {L}}'\). From (*) and (WF-Heap) we conclude \(\varTheta ' \vdash _{\overline{\psi }} \texttt {v}: \texttt {D}'\) for some \(\texttt {D}' <:_{\overline{\psi }} \texttt {D}\).

\(\square \)

The next two lemmas prove fundamental properties of the post-reconfiguration clauses. Since the code in these clauses can only create objects in the current feature configuration, their execution affects only the topmost level of the lazy heap.

Lemma 2

Let \(\texttt {R}= \overline{\psi } \Rightarrow \overline{\psi }'\{{\cdots }\}\) and:

  1. \(\varSigma _1 \vdash \texttt {v}_i : \texttt {C}_i\) and \(\texttt {C}_i <:_{\overline{\psi }} \texttt {A}_i\) for every \(i\in 1..|\overline{\texttt {v}}|\);

  2. \(\varSigma _2 \vdash \texttt {u}_i : \texttt {D}_i\) and \(\texttt {D}_i <:_{\overline{\psi }'} \texttt {B}_i\) for every \(i\in 1..|\overline{\texttt {u}}|\);

  3. \(\texttt {R}; \overline{\texttt {y}:\texttt {A}}; \overline{\texttt {z}:\texttt {B}} \vdash \texttt {B}\,\texttt {z}=\texttt {q} \; {\mathbf {ok}}\);

  4. \(\textit{posteval}(\overline{\psi }',\texttt {q}[\overline{\texttt {v}}/{\overline{\texttt {y}}}][\overline{\texttt {u}}/{\overline{\texttt {z}}}]) = \texttt {u}, \mathscr {H}\).

Then there exists \(\varSigma \) such that:

  • \(\texttt {R}(\varSigma _1), \varSigma _2, \varSigma \vdash _{\overline{\psi }'} \mathscr {H}\);

  • \(\texttt {R}(\varSigma _1), \varSigma _2, \varSigma \vdash \texttt {u}: \texttt {D}\) and \(\texttt {D}<:_{\overline{\psi }'} \texttt {B}\).

Proof

By cases on the shape of \(\texttt {q}\).

  • (\(\texttt {q}= \texttt {null}\)) We conclude by taking \(\varSigma = {\bullet }\) and \(\texttt {D}= \texttt {B}\).

  • (\(\texttt {q}= \texttt {y}_i\)) By hypothesis we have \(\varSigma _1 \vdash \texttt {v}_i : \texttt {C}_i\) and \(\texttt {C}_i <:_{\overline{\psi }} \texttt {A}_i\). From (T-PostAssignVar) we deduce \(\texttt {R}(\varSigma _1) \vdash \texttt {v}_i : \texttt {R}(\texttt {C}_i)\) and \(\texttt {R}(\texttt {C}_i) <:_{\overline{\psi }'} \texttt {B}\). We conclude by taking \(\varSigma = {\bullet }\) and \(\texttt {D}= \texttt {R}(\texttt {C}_i)\).

  • (\(\texttt {q}= \texttt {new}~\texttt {C}(\texttt {z}_1,\dots ,\texttt {z}_n)\)) Then \(\texttt {u}= \iota \) and \(\mathscr {H}= \{ \iota \mapsto \langle \texttt {C}, \texttt {f}_1 = \texttt {z}_1[\overline{\texttt {u}}/{\overline{\texttt {z}}}],\dots , \texttt {f}_n = \texttt {z}_n[\overline{\texttt {u}}/{\overline{\texttt {z}}}] \rangle \}\). Let \(\varSigma = \{ \iota : \texttt {C}\}\) and \(\texttt {D}= \texttt {C}\) and \(\texttt {D}_1\,\texttt {f}_1,\ldots ,\texttt {D}_n\,\texttt {f}_n = { fields}_{\overline{\psi }'}(\texttt {C})\). From rule (T-PostAssignNew) we have \(\texttt {D}= \texttt {C}<:_{\overline{\psi }'} \texttt {B}\). Also, from the same rule we deduce that for every \(i\in 1..n\) we have \(\texttt {z}_i : \texttt {B}_i \in \overline{\texttt {z}:\texttt {B}}\) and \(\texttt {B}_i <:_{\overline{\psi }'} \texttt {D}_i\). Now \(\texttt {z}_i[\overline{\texttt {u}}/{\overline{\texttt {z}}}] = \texttt {u}_j\) for some \(j\in 1..|\overline{\texttt {u}}|\). By hypothesis we have \(\varSigma _2 \vdash \texttt {u}_j : \texttt {D}_j\) and \(\texttt {D}_j <:_{\overline{\psi }'} \texttt {B}_j\). We conclude \(\texttt {R}(\varSigma _1),\varSigma _2,\varSigma \vdash _{\overline{\psi }'} \mathscr {H}\) by transitivity of \(<:_{\overline{\psi }}\).

\(\square \)

Lemma 3

Let \(\texttt {R}= \overline{\psi } \Rightarrow \overline{\psi }'\{{\cdots }\}\) and:

  1. \(\varSigma _1 \vdash \texttt {v}_i : \texttt {C}_i\) and \(\texttt {C}_i <:_{\overline{\psi }} \texttt {A}_i\) for every \(i\in 1..|\overline{\texttt {v}}|\);

  2. \(\varSigma _2 \vdash \texttt {u}_i : \texttt {D}_i\) and \(\texttt {D}_i <:_{\overline{\psi }'} \texttt {B}_i\) for every \(i\in 1..|\overline{\texttt {u}}|\);

  3. \(\texttt {R}; \overline{\texttt {y}:\texttt {A}}; \overline{\texttt {z}:\texttt {B}} \vdash \overline{\texttt {B}'\,\texttt {z}'=\texttt {q}} \; {\mathbf {ok}}\);

  4. \(\textit{postassign}(\overline{\psi }',\overline{\texttt {B}'\,\texttt {z}'=\texttt {q}}[\overline{\texttt {v}}/{\overline{\texttt {y}}}][\overline{\texttt {u}}/{\overline{\texttt {z}}}]) = \overline{\texttt {u}}', \mathscr {H}\).

Then there exists \(\varSigma \) such that:

  • \(\texttt {R}(\varSigma _1), \varSigma _2, \varSigma \vdash _{\overline{\psi }'} \mathscr {H}\);

  • \(\texttt {R}(\varSigma _1), \varSigma _2, \varSigma \vdash \texttt {u}'_i : \texttt {D}_i'\) and \(\texttt {D}_i' <:_{\overline{\psi }'} \texttt {B}'_i\) for every \(i\in 1..|\overline{\texttt {u}}'|\).

Proof

By induction on \(|\overline{\texttt {u}}'|\). If \(|\overline{\texttt {u}}'| = 0\), then \(\mathscr {H}= \emptyset \) and we conclude immediately by taking \(\varSigma = {\bullet }\). Suppose \(|\overline{\texttt {u}}'| > 0\). Then \(\overline{\texttt {B}'\,\texttt {z}'=\texttt {q}} = \texttt {B}'\,\texttt {z}'=\texttt {q}\overline{\texttt {B}''\,\texttt {z}''=\texttt {q}'}\) and \(\overline{\texttt {u}}' = \texttt {u}'\overline{\texttt {u}}''\) where

  • \(\textit{posteval}(\overline{\psi }',\texttt {q}[\overline{\texttt {v}}/{\overline{\texttt {y}}}][\overline{\texttt {u}}/{\overline{\texttt {z}}}]) = \texttt {u}', \mathscr {H}\)

  • \(\textit{postassign}(\overline{\psi }',\overline{\texttt {B}''\,\texttt {z}''=\texttt {q}'}[\overline{\texttt {v}}/{\overline{\texttt {y}}}][\overline{\texttt {u}}\texttt {u}'/{\overline{\texttt {z}}}\texttt {z}']) = \overline{\texttt {u}}'', \mathscr {H}'\)

From Lemma 2 we deduce that there exists \(\varSigma '\) such that \(\texttt {R}(\varSigma _1),\varSigma _2,\varSigma ' \vdash _{\overline{\psi }'} \mathscr {H}\) and \(\texttt {R}(\varSigma _1),\varSigma _2,\varSigma ' \vdash \texttt {u}' : \texttt {D}'\) and \(\texttt {D}' <:_{\overline{\psi }'} \texttt {B}'\).

By induction hypothesis we deduce that there exists \(\varSigma ''\) such that \(\texttt {R}(\varSigma _1),\varSigma _2,\varSigma '' \vdash _{\overline{\psi }'} \mathscr {H}'\) and for every \(i\in 1..|\overline{\texttt {u}}''|\) we have \(\texttt {R}(\varSigma _1),\varSigma _2,\varSigma '' \vdash \texttt {u}_i'' : \texttt {D}_i'\) and \(\texttt {D}_i' <:_{\overline{\psi }'} \texttt {B}_i'\).

We conclude by taking \(\varSigma = \varSigma ',\varSigma ''\). \(\square \)

We are now ready to formally state the subject reduction theorem, whose proof relies upon the auxiliary lemmas just presented.

Theorem 2

(Subject reduction) Let \(\varTheta \vdash _{\overline{\psi }}{\mathscr {L}}, \overline{\iota }, \textit{e}: \texttt {T}\) and \(\textit{meth}_{\overline{\psi }}(\texttt {main}, \texttt {Main}) = \texttt {C}_0 \; \texttt {main}()\{\_\}\). Then:

  1. if \(\overline{\psi }, {\mathscr {L}}, \overline{\iota }, \textit{e} \longrightarrow \overline{\psi }, {\mathscr {L}}', \overline{\iota }', \textit{e}'\), then there exist \(\varTheta ' \Supset \varTheta \) and \(\texttt {T}' <:_{\overline{\psi }} \texttt {T}\) such that \(\varTheta ' \vdash _{\overline{\psi }}{\mathscr {L}}', \overline{\iota }', \textit{e}': \texttt {T}'\).

  2. if \(\overline{\psi }, {\mathscr {L}}, \overline{\iota }, \textit{e} \Longrightarrow \overline{\psi }', \emptyset :\texttt {R}({\mathscr {L}}), \overline{\iota }, \textit{e}\), then there exists \(\texttt {T}' <:_{\overline{\psi }'} \texttt {C}_0\) such that \({\bullet }: \texttt {R}(\varTheta ) \vdash _{\overline{\psi }'} (\emptyset : \texttt {R}({\mathscr {L}})), \overline{\iota }, \textit{e} : \texttt {T}'\).

Proof

The proof of item (1) is almost standard, the only notable exception being that the heap is accessed through the auxiliary function \(\textit{olookup}(\_,\_)\), which may trigger object reconfiguration. Lemma 1 guarantees that the object returned by \(\textit{olookup}(\_,\_)\) has the right type and that the new heap is still well formed in the current configuration. Item (2) is immediate, since the new heap environment is updated according to the structure of the new heap and the \(\mathbf {Enabled}\) predicate guarantees that the runtime expression \(\textit{e}\) is still well typed in the new configuration. \(\square \)

The progress theorem and its proof are standard.

Theorem 3

(Progress) Let \(\varTheta \vdash _{\overline{\psi }}{\mathscr {L}}, \overline{\iota }, \textit{e}: \texttt {T}\) and . Then:

  1. either \(\textit{e}\) is a value, or

  2. for some evaluation context \(E\) we can express \(\textit{e}\) as

    (a) \(E[\texttt {null}.\texttt {f}]\) for some \(\texttt {f}\), or

    (b) \(E[\texttt {null}.\texttt {f}= \texttt {v}]\) for some \(\texttt {f}\) and \(\texttt {v}\), or

    (c) \(E[\texttt {null}.\texttt {m}(\overline{\texttt {v}})]\) for some \(\texttt {m}\) and \(\overline{\texttt {v}}\).

We conclude with type soundness, which is a straightforward corollary of subject reduction and progress.

Proof of Theorem 1

(Type Soundness) Immediate by Theorems 2 and 3. \(\square \)

Cite this article

Damiani, F., Padovani, L., Schaefer, I. et al. A core calculus for dynamic delta-oriented programming. Acta Informatica 55, 269–307 (2018). https://doi.org/10.1007/s00236-017-0293-6
