Behaviours as Design Components of Cyber-Physical Systems

Chapter in: Software Engineering (LASER 2013, LASER 2014)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8987)

Abstract

System behaviour is proposed as the core object of software development. The system comprises both the software machine and the problem world. The behaviour of the problem world is ensured by the combination of its given properties and the interacting behaviour of the machine. The fundamental requirements do not mandate specific system behaviour but demand that the behaviour exhibit certain desirable properties and achieve certain effects. These fundamental requirements therefore include usability, safety, reliability and others commonly regarded as ‘non-functional’. A view of behaviour content and structure is presented, based on the Problem Frames approach, leading to a specification in terms of concurrent behaviour instances created and controlled within a tree structure. Development method is not addressed in this short paper; nor is software architecture. For brevity, and clearer visibility of the thread of the paper’s theme, much incidental, explanatory, illustrative and detailed material is relegated to end notes. A final section summarises the claimed value of the approach in addressing the characteristic challenges of cyber-physical systems.

Notes

  1.

    The work described is pre-formal because its desired product is a documented understanding of the system, sufficiently sound and well-structured to justify and guide the subsequent deployment of formal techniques. As von Neumann and Morgenstern wrote [3]:

    “There is no point in using exact methods where there is no clarity in the concepts and issues to which they are to be applied. Consequently the initial task is to clarify the knowledge of the matter by further careful descriptive work.”

    In addition to careful description, software development demands exploration, invention and design. These activities must be open to unexpected discoveries, and should therefore not be constrained by a priori commitment to the tightly restricted semantics of a formal language. This does not mean that pre-formal work is condemned to gratuitous vagueness. It means only that the appropriate semantics, scope and level of abstraction for describing each particular topic and aspect cannot be exactly determined in advance. The freedom to make these choices in an incremental, opportunistic and emergent fashion should not be hampered by premature choice of a formal language.

  2.

    The stakeholders of a system are those people and organisations who have a legitimate claim to influence the design of the system behaviour. Some stakeholders—for example, the driver of a car or the wearer of a cardiac pacemaker—are themselves participants in the system behaviour. Others—for example, the representative of a regulatory body or of the company paying for the system—are not. Stakeholder purposes and desires may be formally or informally satisfiable, and may be observable in the problem world or outside it.

  3.

    The word system is often used to denote only the machine executing the software. Here, instead, we always use it to denote the machine and the physical problem world together. For a cyber-physical system the execution of the software is merely a means to obtain a desired behaviour in the physical world outside the machine, and has no significance except in that role. We use the word behaviour to denote either an assemblage of processes with multiple participants or an instance of the execution of the assemblage; which is meant should be clear from the context in each case.

  4.

    There are many kinds and forms of requirements. Some are constraints on budgets and delivery dates, on the composition and organisation of the development team, and other such matters of economic or social importance. Here we are concerned only with those requirements whose satisfaction is to be judged solely by the behaviours and effects of the system in operation.

  5.

    A stakeholder criterion of requirement satisfaction may lie far outside the problem world: for example, the system may be required to attract a large number of new, as yet unidentified, customers in new markets. A requirement may be insufficiently exact to allow rigorous validation: for example, that the behaviour of a car should never surprise its driver. Satisfaction of such requirements must be carefully considered by the stakeholders and developers during the design work; but cannot be formally demonstrated and can be convincingly evaluated only by experience with the installed system.

  6.

    The problem world of an avionics system, for example, includes the airframe, its control surfaces and undercarriage, the engines, the earth’s atmosphere, the airport runways, the aviation fuel, the pilots and other crew, the passengers, the gates for embarkation and disembarkation, other aircraft, the air traffic control system, and so on.

  7.

    We regard the problem domains as given in the sense that the task of software engineering, per se, is not to develop or redesign physical artifacts, but to create software that will monitor and control their behaviour. In practice, of course, some projects may demand a degree of co-design of physical and software artifacts, and software engineers will have a central contribution to make to that work.

  8.

    The given properties and behaviours of a physical problem domain are constrained by the laws of physics, by its designed or otherwise constituted form, and also by its external environment. A domain is potentially capable of exhibiting varying behaviours according to the contexts in which it may be placed.

  9.

    Constraints on a domain’s potential behaviour are applied by its context. In a cyber-physical system the immediate context comprises its physical neighbours—the machine and other domains with which it interacts. A domain that does not interact directly with the machine may be constrained by causal chains involving other domains.

  10.

    The system behaviour is not to be conceived or expressed as a set of stimulus-response pairs or in any other similarly fragmented form. It extends over time, and is to be understood as a whole. As Poincaré asked [4]:

    “Would a naturalist imagine that he had an adequate knowledge of the elephant if he had never studied the animal except through a microscope?”

    “It is the same in mathematics. When the logician has resolved each demonstration into a host of elementary operations, all of them correct, he will not yet be in possession of the whole reality; that indefinable something that constitutes the unity of the demonstration will still escape him completely.”

    The disadvantages of a fragmented view of behaviour are made explicit in another paper [5].

  11.

    For example, to describe the precise layout of a road junction for a traffic control system, and the positions within it of the lights, vehicle sensors and pedestrian crossing request buttons.

  12.

    For example, the physiology of a recipient of a cardiac pacemaker is crucial to the system design. So too is the physical size of a machine press operator whose safety depends on the limited arm span which prevents the operator from pressing the start button with one hand while the other hand is in the danger area.

  13.

    The machine specification produced by the development approach presented here is simultaneously physical—being explicitly described in terms of its interfaces to the physical world—and abstract—because it need not necessarily correspond to a software or hardware module of the eventual implementation.

  14.

    The problem world naturally presents itself to us as populated by distinct entities or domains, whereas the machine does not. The design process, briefly presented in later sections, allows decomposition of what was initially postulated to be one machine into two or more smaller machines.

  15.

    In an unjustly neglected response [16] to Fred Brooks’s acclaimed talk No Silver Bullet, Wlad Turski wrote:

    “There are two fundamental difficulties involved in dealing with non-formal domains also known as ‘the real world’:

    (1)

      Properties they enjoy are not necessarily expressible in any single linguistic system.

    (2)

      The notion of mathematical (logical) proof does not apply to them.”

    This is the salient challenge that physicality presents to dependable system design. It is absent from abstract mathematical problem worlds, such as the world of integers and the problems of finding and dealing with large primes.

  16.

    Such a discipline would contribute to solving the problem characterised in an illuminating paper [17] by Brian Cantwell Smith as the relationship between the model and the world: “In the end, any adequate theory of action, and, consequently, any adequate theory of correctness, will have to take the model-world relationship into account”. A discipline of description should constitute a major topic of research in its own right, but the need has been largely ignored by the software engineering community. Some aspects are touched on informally in a 1992 paper [18] and a 1995 book [19]. Further work is in progress but is not discussed in the present paper.

  17.

    For example, tolerating faults in physical equipment may demand at least two formalisations. In one, the equipment is assumed faultless, and the associated behaviour relies on that faultless functionality. In the other, the potentiality for fault is acknowledged, and the associated behaviour relies only on residual domain properties that allow faults to be detected, diagnosed, and mitigated. The two behaviours may be concurrently active, and the two—even potentially conflicting—formalisations are relied on simultaneously.
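
    As a minimal sketch of this idea—with a hypothetical pump domain and invented names, not anything from the paper—the two behaviours might run concurrently like this, one relying on the faultless formalisation and the other relying only on the residual property that commanded and sensed flow can be compared:

      import random
      import threading
      import time

      pump_commanded_on = False
      fault_detected = threading.Event()

      def read_flow_sensor() -> float:
          # Stand-in for the physical sensor; occasionally the pump fails.
          if random.random() < 0.05:
              return 0.0                          # fault: no flow despite command
          return 1.0 if pump_commanded_on else 0.0

      def normal_behaviour() -> None:
          # Relies on the faultless formalisation: command implies flow.
          global pump_commanded_on
          while not fault_detected.is_set():
              pump_commanded_on = True            # normal control action
              time.sleep(0.1)

      def fault_monitoring_behaviour() -> None:
          # Relies only on residual properties: flow is observable, so a
          # discrepancy between command and flow can be detected and mitigated.
          while not fault_detected.is_set():
              if pump_commanded_on and read_flow_sensor() == 0.0:
                  fault_detected.set()            # hand over to mitigation
              time.sleep(0.1)

      threading.Thread(target=normal_behaviour, daemon=True).start()
      threading.Thread(target=fault_monitoring_behaviour, daemon=True).start()
      time.sleep(2.0)
      print("fault detected:", fault_detected.is_set())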

  18.

    Phenomena are shared in the CSP [6] sense that more than one domain participates in the same event, or can observe the same element of a domain state. A shared event or shared mutable state is controlled by exactly one participating domain and observed by the other participants.
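
    The control/observation asymmetry can be sketched in a few lines of Python (an illustrative toy, not CSP itself): exactly one named domain may cause a shared event, and every other participant merely observes its occurrence.

      from typing import Callable, List

      class SharedEvent:
          def __init__(self, name: str, controller: str) -> None:
              self.name = name
              self.controller = controller        # the single controlling domain
              self.observers: List[Callable[[str], None]] = []

          def observe(self, callback: Callable[[str], None]) -> None:
              self.observers.append(callback)

          def occur(self, domain: str) -> None:
              if domain != self.controller:       # only the controller may cause it
                  raise PermissionError(f"{domain} does not control {self.name}")
              for callback in self.observers:     # all participants share the event
                  callback(self.name)

      press = SharedEvent("button_press", controller="Operator")
      press.observe(lambda e: print("Machine observes", e))
      press.occur("Operator")                     # OK: controller causes the event
      # press.occur("Machine")                    # would raise PermissionError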

  19.

    In the problem diagram a symbol with a dashed outline represents a symbolic, possibly informal, description. The behaviour ellipse represents a behavioural description of the system. The requirements symbol represents a description of stakeholder desires and purposes. The level of abstraction at which the subject matter is described will, of course, vary according to the context and purpose of the description.

  20.

    The relationship between machine, problem world properties and system behaviour is complex. It should not be assumed that the machine design can be derived formally, or even systematically, from the other two. In particular, there may be more than one machine that can achieve a chosen behaviour in a given problem world.

  21.

    As Harel and Pnueli rightly observe [7]:

    “While the design of the system and then its construction are no doubt of paramount importance (they are in fact the only things that ultimately count) they cannot be carried out without a clear understanding of the system’s intended behavior. This assertion is not one which can be easily contested, and anyone who has ever had anything to do with a complex system has felt its seriousness. A natural, comprehensive, and understandable description of the behavioral aspects of a system is a must in all stages of the system’s development cycle, and, for that matter, after it is completed too.”

  22.

    The intrusion of non-formal concepts and concerns vitiates a formal demonstration. The system boundary is therefore related in its aim, though not in its realisation, to Dijkstra’s notion of program specification as a firewall. He wrote [8]:

    “The choice of functional specifications—and of the notation to write them down in—may be far from obvious, but their role is clear: it is to act as a logical ‘firewall’ between two different concerns. The one is the ‘pleasantness problem,’ i.e. the question of whether an engine meeting the specification is the engine we would like to have; the other one is the ‘correctness problem,’ i.e. the question of how to design an engine meeting the specification…. the two problems are most effectively tackled by… psychology and experimentation for the pleasantness problem and symbol manipulation for the correctness problem.”

    Dijkstra’s aim was to achieve complete formality in program specification and construction. Our aim here is to preserve a sufficient degree of formality within the system boundary to achieve dependability of system behaviour. The firewall ensures—pace Dijkstra’s dismissive characterisation of the ‘pleasantness problem’—only that what is inside is sufficiently formal: not that everything outside is informal. Some requirements are formal: for example, the requirement in an electronic purse system that money is conserved in every transaction even if the transaction fails.

  23.

    All formalisation of the physical world, at the granularity relevant to most software engineering (though not, perhaps, to the engineering of experiments in particle physics) is conscious abstraction. Because the physical world, at this granularity, is not a formal system, a formal model can be only an approximation to the reality. In a formal world, after the instruction sequence

    \( x := P;\; y := x;\; x := y \)

    the condition “x = P” will certainly hold. But in a robotic system, after the sequence

    \( x := P;\; Arm.moveTo(x);\; x := Arm.currentPosition \)

    the condition “x = P” may not hold. Moving the arm and sensing its position both involve state phenomena of the physical world. Movement of the arm may fail, and will certainly be imprecise; and the resulting position will be imprecisely sensed and further approximated in the machine by a floating-point number.

    Unreliability and approximation limit the dependability of any cyber-physical system [9] and the confidence that can be legitimately placed in formal demonstration. A crucial concern in the design of a critical system is achieving acceptable dependability within these limits.
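
    The robotic fragment can be made executable with a hypothetical Arm class (invented here for illustration): the best the machine can then assert is membership of a tolerance interval, not equality.

      import random

      class Arm:
          def __init__(self) -> None:
              self._position = 0.0
          def moveTo(self, target: float) -> None:
              # Physical movement is imprecise: model the error as noise.
              self._position = target + random.gauss(0.0, 0.01)
          def currentPosition(self) -> float:
              # Sensing adds further error and quantisation.
              return round(self._position + random.gauss(0.0, 0.005), 3)

      P = 10.0
      arm = Arm()
      x = P
      arm.moveTo(x)
      x = arm.currentPosition()
      # In a formal world x == P would hold; physically only a tolerance
      # check is justified, and even that may occasionally fail.
      print("within tolerance:", abs(x - P) <= 0.05)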

  24.

    The given properties of each problem domain must be investigated and explicitly described: together, they provide the \( \{W_i\} \) in the entailment \( M, \{W_i\} \models B \). It is a mistake to elide these descriptions into a single description encompassing both the machine and the problem domains. A separate description of a domain’s given properties clearly distinguishes what the machine relies on from what it must achieve, and allows those potential properties and behaviours to be made explicit that the machine, by its behaviour, suppresses, avoids or neglects.

  25.

    The system can be closed in the necessary sense by internalising external impacts on the problem domains. Suppose, for example, that domains A and B are both vulnerable to failure of a common electrical power supply P. If P is not included as a problem domain, electrical power failure in A must be formalised as a spontaneous and unpredictable internal event of A, and similarly for B. It is then impermissible to assert that power failures of A and B are coordinated, since there is no problem domain to which this coordination can be ascribed. Similarly, in an automotive system the driver must be included as a problem domain if the driver’s physical capabilities and expected behaviours are relied on to prove the entailment \( M, \{W_i\} \models B \).
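
    A sketch, with invented names, of the difference it makes to internalise P: once the supply is an explicit domain, the coordination of the failures of A and B has something to be ascribed to.

      class PowerSupply:
          # The internalised domain P: the single cause of coordinated failures.
          def __init__(self) -> None:
              self.on = True
              self._dependents = []
          def attach(self, equipment: "Equipment") -> None:
              self._dependents.append(equipment)
          def fail(self) -> None:
              self.on = False
              for equipment in self._dependents:
                  equipment.on_power_lost()       # one event, coordinated effects

      class Equipment:
          def __init__(self, name: str) -> None:
              self.name = name
              self.powered = True
          def on_power_lost(self) -> None:
              self.powered = False
              print(self.name, "lost power")

      p = PowerSupply()
      a, b = Equipment("A"), Equipment("B")
      p.attach(a)
      p.attach(b)
      p.fail()    # A and B now fail together, ascribed to the domain P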

  26.

    Unfortunately, in many development projects this distinction is elided, and requirements are stated as explicit direct descriptions—albeit often fragmented descriptions—of system behaviour. This is a mistake, exactly parallel to the classic mistake of specifying a program by giving a procedural description of its behaviour in execution.

  27.

    For example, an avionics system must support the normal sequence of flight phases: gate departure, taxiing, take-off, climbing, cruising, and so on. A radiotherapy system must support the normal prescription and treatment protocols: prescription specification and checking, patient positioning, position adjustment, beam focusing, dose delivery, beam shutoff, and so on.

  28.

    In telephone systems of the late 20th century such features as call forwarding, call blocking and voicemail proliferated. The complexity resulting from their interactions caused ever-increasing difficulty in the development of those systems, and often produced inconvenient and disagreeable surprises for users. This feature interaction problem [10, 11] became widely known: it was soon recognised as a serious problem in most realistic systems.

  29.

    ‘Comprehensibly dependable’ does not imply ‘predictable’. A realistic system has problem domains—notably its human participants—that exhibit non-deterministic behaviour. In general, therefore, prediction of system behaviour is always contingent. What matters is that neither the developers nor the human participants should be surprised by unexpected occurrences of anomalous behaviour.

  30.

    For example, if the main power supply fails in a passenger lift system the car is to be moved, under auxiliary power, to the nearest floor for the passengers to disembark. If the hoist cable breaks a more radical solution is necessary: the lift car is locked in the shaft to prevent free fall, and the passengers must then wait to be rescued by an engineering crew.

  31.

    The development problem for a constituent behaviour is spoken of as a subproblem. Initially the constituent behaviour is considered in isolation from other behaviours, ignoring both its interactions at common problem domains and its interaction with its controlling behaviour. (Behaviour control is discussed in Sect. 8.)

  32.

    The second of Descartes’s famous rules of thought [12] was:

    “Divide each problem that you examine into as many parts as you can and as you need to solve them more easily.”

    Leibniz rightly observed in response [13]:

    “This rule of Descartes is of little use as long as the art of dividing remains unexplained… By dividing his problem into unsuitable parts, the inexperienced problem-solver may increase his difficulty.”

    Any discipline that aims to master complexity by decomposition must identify and apply criteria of component simplicity.

  33.

    A machine’s software structure is regular if there is no structure clash [14]. That is: the dynamic structure of the software clearly composes the dynamic structures at its interfaces to problem domains.

  34.

    Reasoning about the relationship between the machine and the system behaviour is greatly complicated if the given domain properties are not constant. For example, they may vary with environmental conditions or with varying loads imposed by varying requirements on the system behaviour.

  35.

    Both top-down and bottom-up design of the system behaviour are used as necessary. If—as is the case for any realistic system—no tersely explicable purpose of the whole system behaviour can be identified, bottom-up design must be used: the purpose of the whole will then emerge from the designed combination of the constituents.

  36.

    The causal pattern by which the machine ensures the problem world behaviour is what Polanyi [15] calls the operational principle of a contrivance—and a system is a contrivance in his sense. Simplicity of this causal pattern is one important characteristic of a simple behaviour.

  37.

    Formal verification of a specification proves the entailment \( M, \{W_i\} \models B \). Some additional formal and informal verification is needed to demonstrate the quasi-entailment \( \{W_i\}, B \mathrel{|\!\sim} R \)—that is, that the requirements are satisfied. Demonstrating that the formalisation of the given problem world is sufficiently faithful to the physical reality is an entirely distinct task: it is inherently non-formal, and is typically both the hardest and the most vital.
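
    A toy instance (a one-button lamp, invented here purely for illustration) shows how the obligations divide:

      \[
      \begin{aligned}
        W &: \mathit{lit} \leftrightarrow \mathit{powered}   && \text{given domain property}\\
        M &: \mathit{pressed} \rightarrow \mathit{powered}   && \text{machine specification}\\
        B &: \mathit{pressed} \rightarrow \mathit{lit}       && \text{specified system behaviour}\\
        M, \{W\} &\models B                                  && \text{formal verification}\\
        \{W\}, B &\mathrel{|\!\sim} R                        && \text{informal: } R = \text{“light on demand”}
      \end{aligned}
      \]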

  38.

    For example, an automotive feature such as Cruise Control or Stop-Start must correspond to an identifiable part or projection of the system behaviour specification, not to a collection of stimulus-response pairs distributed among many parts of the whole specification.

  39.

    It makes obvious sense to understand the components before addressing the task of their composition. Neglect of this principle is the Achilles heel of top-down decomposition and of its cousin stepwise refinement.

  40.

    The third of Descartes’s famous rules of thought [12] was:

    “… to conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.”

  41.

    Candidate constituent behaviours arise both in top-down decomposition, as briefly illustrated in Sect. 7, and in bottom-up development, in which candidate constituents are identified piecemeal. In both cases each candidate constituent must be analysed, and its simplicity evaluated, before it can be definitely accepted as a component in the system behaviour design.

  42.

    Traditional block-structured programming establishes frame conditions for modules based on scope rules. In a cyber-physical system such frame conditions are frustrated by the connectedness of the physical problem world: behaviours interact unavoidably at physical domains that are common—directly or indirectly—to their problem worlds.

  43.

    Eagerness to rush into designing a software architecture is usually misplaced. One freedom that software—unlike hardware—allows its developers is the malleability of their material. Many structural transformations are possible that preserve chosen specification properties of the source while endowing the target with new properties suited to efficient construction and execution of program code. Knowing that such transformations are available, developers should resist the temptation to cast behaviour specifications in the form of an architecture of software modules. The machine associated with the behaviour in each subproblem should be regarded as a projection, not a component, of the complete software.

  44.

    Associated with each machine, from its expression as a problem in the pattern of Fig. 1, are the documented descriptions: \( M \) of the machine; \( \{W_i\} \) of the problem domains’ given properties and behaviours; and \( B \) of the system behaviour. The machine is also associated with the relevant requirements \( \{R_j\} \). It is this assemblage of descriptions that defines the behaviour: the machine is the designed means of realising each of its necessary instances.

  45.

    This is top-down structuring. It starts from a firm conception of the function of the whole behaviour to be developed, and, level by level, identifies constituent parts that for any reason should be regarded as separate components. In a realistic cyber-physical system the proliferation of functions and features demands extensive use of bottom-up structuring, in which initially there is no firm conception of the whole behaviour: it emerges only gradually from the piecemeal identification and combination of constituents. Bottom-up structuring is briefly discussed later, in Sect. 8.

  46.

    For example, because there is a structure clash [14]: the process structures of BX and BY are incompatible, and the simplicity criterion that stipulates regular process structure cannot be satisfied in a single undecomposed behaviour B0.

  47.

    It may seem paradoxical—or, at least, inconsistent—to promote a designed domain, which was merely a local data structure in the software of a machine, as a legitimate problem domain on all fours with the physical domains Domain X and Domain Y. But of course the unpromoted local variable was physically realised in the store of the machine MB0. Its promotion merely makes visible and explicit what was previously hidden and implicit. From the point of view of MBX and MBY it is a problem domain, external to those machines, to be respectively controlled and monitored.
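
    A minimal sketch of the promotion, keeping the note’s names MBX and MBY and assuming (purely for illustration) that the designed domain is realised as a queue: what was a hidden local variable of MB0 becomes an explicit domain, controlled by MBX and monitored by MBY.

      import queue
      import threading

      designed_domain: "queue.Queue[str]" = queue.Queue()  # promoted local variable

      def machine_MBX() -> None:
          # Controls the designed domain: writes the state that MBY observes.
          for item in ["x1", "x2", "x3"]:
              designed_domain.put(item)
          designed_domain.put("EOF")          # sentinel marking the end

      def machine_MBY() -> None:
          # Monitors the designed domain in its own process structure, so the
          # structure clash between BX and BY never arises within one machine.
          while (item := designed_domain.get()) != "EOF":
              print("MBY consumed", item)

      threading.Thread(target=machine_MBX).start()
      machine_MBY()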

  48.

    A designed domain, once identified in a proposed or existing system, raises many important questions about its purpose, use and realisation. Between which behaviours does the domain provide communication? Of which behaviour’s machine is the domain a local variable? Can the domain be instantiated more than once? How long does each instance persist? By which behaviours are the values of the domain state initialised and mutated? The reader may wish to ponder these questions for the examples mentioned in the text. Consider, for instance, the road layout domain in the traffic system. It is a designed domain for the traffic control behaviour. In which other behaviour is it a designed domain? Of which machine is it a local variable? Considering these questions can identify important large-scale concerns in system design. For example: a database associated with the operating parameters and constraints of a chemical process plant or a power station can be regarded as a designed domain. Safety demands that update access to this database must be explicitly controlled by the machine of which it is a promoted local variable. Apparent absence of such a machine from the behaviour specification indicates a severe safety exposure.

  49.

    A model is an artifact providing information about its subject. We may distinguish analogic from symbolic models. A symbolic model—for example, a set of equations or a state transition diagram—is entirely abstract. The notational expression of a symbolic model itself carries no information about the subject: essentially, the model is simply a description that allows formal reasoning in the hope of revealing or proving some implied property or behaviour of its subject. An analogic model—for example, a system of water pipes demonstrating the flow of electricity in a circuit—is a physical object whose physical characteristics are analogues of those of the subject: water flow is analogous to electric current, pipe cross-section to the inverse of electrical resistance, a tank to a battery, and so on.

    Often, a software model such as a database or an assemblage of objects is an analogic model of its subject. Each subject entity is analogous to a certain type of record or object; relationships between entities are analogous to pointers or record keys, and so on. The motivation for an analogic model is clear: the model is a surrogate, immediately available to the software, for historical or current aspects of the subject that are not readily accessible to direct inspection.

    The danger of an analogic model is, of course, confusion of properties peculiar to the model with those belonging also—albeit by analogy—to the subject. Breaking a water pipe causes water to spill out; but breaking a wire in an electric circuit causes no analogous effect. A well-known example of such confusion in software engineering is the common uncertainty about the meaning of a null value in a cell of a relational database table.
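
    The null-value confusion can be reproduced in a few lines (the patient schema is invented for illustration): in the model NULL is a single value with three-valued comparison rules, while in the subject it may mean ‘unknown’, ‘not applicable’ or ‘not yet recorded’.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE patient (name TEXT, date_of_death TEXT)")
      db.execute("INSERT INTO patient VALUES ('A', NULL)")   # alive? unknown?
      db.execute("INSERT INTO patient VALUES ('B', '2014-05-01')")

      # Three-valued logic in the model: NULL <> x is UNKNOWN, not TRUE, so
      # patient A appears in neither result, whatever NULL was meant to mean.
      died_then = db.execute(
          "SELECT name FROM patient WHERE date_of_death = '2014-05-01'").fetchall()
      died_other = db.execute(
          "SELECT name FROM patient WHERE date_of_death <> '2014-05-01'").fetchall()
      print(died_then, died_other)    # [('B',)] [] -- A is in neither list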

  50.

    In the word processing system, for example, the document designed domain communicates information between the editing behaviour and other behaviours—storage, printing, transformation, and others—in which the document participates.

  51.

    The behaviour control diagram shows only the parent-child relationship. The dynamic rules and patterns of instantiations are not shown in the diagram but only in the specification or program text of the controlling machine. Although designed domains appear in a behaviour control diagram, their associations with individual behaviours by membership of their problem worlds are not represented.

  52.

    Where a problem domain is populated by multiple individual entities there will be behaviours whose instantiations must be specialised in this way. In a library system, for example, a loan behaviour must be specialised to the borrowed book and the borrowing member.
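
    A sketch of such specialised instantiation for the library example (the class names are invented): the controlling behaviour creates one loan instance per borrowed book and borrowing member.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Book:
          barcode: str

      @dataclass(frozen=True)
      class Member:
          member_id: str

      class LoanBehaviour:
          # One instance per (book, member) pair, created by the controlling
          # behaviour when the loan starts and destroyed when it ends.
          def __init__(self, book: Book, member: Member) -> None:
              self.book = book
              self.member = member
              self.active = True
          def return_book(self) -> None:
              self.active = False

      loan = LoanBehaviour(Book("B-1042"), Member("M-007"))   # specialised instance
      loan.return_book()
      print(loan.active)    # False: this loan instance has run to completion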

  53.

    Instances of distinct behaviours, and distinct instances of the same behaviour, suitably specialised, may be temporally related by concurrency or in any other way governed by the controlling behaviour.

  54.

    A designed halt may occur when the goal of the behaviour has been attained or has become unattainable. The associated failure condition is within the envisaged results of execution, and must be clearly distinguished from a failure of the assumed environment conditions—which, by definition, is not addressed within the behaviour’s own design.

  55.

    Pre-emptive abortion is typically needed only in emergency conditions. In a lift system, for example, the normal lift service behaviour must be pre-emptively aborted if the hoist cable breaks; in an automotive system, the cruise control behaviour must be aborted if a crash impact is detected. Abortion is, of course, not represented as a behaviour state in Fig. 4. Pre-emptive abortion destroys the behaviour instance, which therefore no longer has any state.

  56.

    An orderly stop of lift service might take two forms. The fast form brings the lift car to the nearest floor to allow passengers to disembark because the normal power supply has failed and the lift is moving under emergency power; the slower form brings the car to the ground floor under normal power to allow lift service to be suspended without inconveniencing users.
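
    The distinctions drawn in notes 54–56 can be summarised in a small lifecycle sketch for the lift service (states and triggers are invented for illustration): designed halts and orderly stops are outcomes within the behaviour’s design, whereas pre-emptive abortion destroys the instance together with its state.

      from enum import Enum, auto

      class Outcome(Enum):
          GOAL_ATTAINED = auto()      # designed halt: success
          GOAL_UNATTAINABLE = auto()  # designed halt: envisaged failure
          FAST_STOP = auto()          # nearest floor, under emergency power
          SLOW_STOP = auto()          # ground floor, service suspended

      class LiftService:
          def __init__(self) -> None:
              self.alive = True
              self.outcome = None
          def halt(self, outcome: Outcome) -> None:
              # Designed halt or orderly stop: within the envisaged results.
              self.outcome = outcome
              self.alive = False

      service = LiftService()
      service.halt(Outcome.FAST_STOP)     # e.g. the main power supply has failed

      # Pre-emptive abortion, by contrast, is imposed by the controlling
      # behaviour: the instance is simply destroyed, leaving no state at all.
      del service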

  57.

    This refinement process is imaginary because formal refinement cannot be a reliable technique in a non-formal world: the more concrete models may expose unacceptable or impractical simplifications in their more abstract predecessors. For example, in Fig. 2 the interposition of the designed domain may introduce sources of latency or error that were implicitly excluded in behaviour B0. When development has been completed it may be possible to retrofit the complexities of the concrete reality to an elaborated abstraction; but this exercise would belong to ex post facto rationalisation and formal verification, not to development method.

  58.

    In the absence of an identified and broadly understood abstract goal behaviour that comprehensibly includes all its constituent behaviours, the overall behaviour must emerge eventually from work on the constituent behaviours at lower levels. No starting point for a refinement process can be identified, because nothing definitive can be said of the overall behaviour while it has not yet emerged.

  59.

    The bottom-up construction of the behaviour tree is progressive only in the sense that constituent behaviours are gradually pieced together as their individual designs and interactions become progressively clearer. In general, the intermediate products of the construction process will constitute a forest rather than a tree. It is too optimistic to conceive of this forest as an ordered structure, similar to a layered hierarchy to be built up in successive layers from the bottom upwards.

  60.

    An obvious possible extension is a third view. The problem diagrams show the relationships between machines and problem domains at the level of each constituent behaviour; the behaviour control diagram shows the relationships among machines. A third view would show the relationships among the problem domains induced by their interfaces of shared phenomena, including interfaces to the machines. The form and representation of such an extension is a topic of further work.

  61.

    The structure of environment conditions assumed by the subproblems naturally follows the structure of the behaviour control tree and the activation choices of controlling behaviours. The environment conditions of a controlled behaviour imply those of its controlling behaviour. The environment conditions assumed by the machine at the tree root are those of the system’s complete operating envelope.
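
    In symbols (the \( E \) notation is introduced here for illustration, not taken from the paper): writing \( E_b \) for the environment conditions assumed by behaviour \( b \),

      \[
      \begin{aligned}
        \mathit{child}(c, p) &\implies (E_c \Rightarrow E_p)\\
        E_{\mathit{root}} &= \text{the system's complete operating envelope}
      \end{aligned}
      \]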

References

  1. Jackson, M.: Problem Frames: Analysing and Structuring Software Development Problems. Addison-Wesley, Boston (2001)

  2. O’Halloran, C.: Nose-Gear velocity—a challenge problem for software safety. In: Proceedings of System Safety Society Australian Chapter Meeting (2014)

  3. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944)

  4. Poincaré, H.: Science et Méthode. Flammarion, Paris (1908). English translation: Science and Method, trans. Francis Maitland, p. 126. Nelson (1914); Dover (1952, 2003)

  5. Jackson, M.: Topsy-Turvy requirements. In: Seyff, N., Koziolek, A. (eds.) Modelling and Quality in Requirements Engineering: Essays Dedicated to Martin Glinz on the Occasion of His 60th Birthday. Verlagshaus Monsenstein und Vannerdat, Muenster (2012)

  6. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall International, Upper Saddle River (1985)

  7. Harel, D., Pnueli, A.: On the development of reactive systems. In: Apt, K.R. (ed.) Logics and Models of Concurrent Systems, pp. 477–498. Springer, New York (1985)

  8. Dijkstra, E.W.: On the cruelty of really teaching computer science. Commun. ACM 32(12), 1398–1414 (1989). (With responses from David Parnas, W L Scherlis, M H van Emden, Jacques Cohen, R W Hamming, Richard M Karp and Terry Winograd, and a reply from Dijkstra)

  9. Smith, B.C.: The limits of correctness. Prepared for the Symposium on Unintentional Nuclear War, Fifth Congress of the International Physicians for the Prevention of Nuclear War, Budapest, 28 June–1 July 1985. ACM SIGCAS Comput. Soc. 14–15(1–4), 18–26 (1985)

  10. Zave, P.: FAQ Sheet on Feature Interaction. AT&T (1999). http://www.research.att.com/~pamela/faq.html

  11. Calder, M., Magill, E. (eds.): Feature Interactions in Telecommunications and Software Systems VI. IOS Press, Amsterdam (2000)

  12. Descartes, R.: Discourse on Method, Part II; Works, vol. VI (1637)

  13. Leibniz, G.W.: Philosophical Writings (Die Philosophischen Schriften), Gerhardt, C.I. (ed.), vol. IV, p. 331 (1875–1890)

  14. Jackson, M.A.: Principles of Program Design. Academic Press, Orlando (1975)

  15. Polanyi, M.: Personal Knowledge: Towards a Post-Critical Philosophy. Routledge and Kegan Paul, London (1958). (University of Chicago Press, 1974)

  16. Turski, W.M.: And no philosopher’s stone either. In: Kugler, H.-J. (ed.) Proceedings of the IFIP World Computer Congress, Dublin (1986)

  17. Smith, B.C.: The limits of correctness. In: Prepared for the Symposium on Unintentional Nuclear War, Fifth Congress of the International Physicians for the Prevention of Nuclear War, Budapest, Hungary, 28 June–1 July (1985)

  18. Jackson, M., Zave, P.: Domain descriptions. In: Proceedings of the IEEE International Symposium on Requirements Engineering, January 1993, pp. 56–64. IEEE CS Press (1993)

  19. Jackson, M.: Software Requirements & Specifications: A Lexicon of Practice, Principles, and Prejudices. Addison Wesley/ACM, New York (1995)

Acknowledgments

Thanks are due to the anonymous reviewer of an earlier draft of this paper for a number of helpful suggestions. The approach described owes much to extended discussions over many years with colleagues and friends, among whom Anthony Hall and Daniel Jackson have been especially patient, encouraging, and insightful.

Author information

Correspondence to Michael Jackson.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Jackson, M. (2015). Behaviours as Design Components of Cyber-Physical Systems. In: Meyer, B., Nordio, M. (eds.) Software Engineering. LASER 2013, LASER 2014. Lecture Notes in Computer Science, vol. 8987. Springer, Cham. https://doi.org/10.1007/978-3-319-28406-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28405-7

  • Online ISBN: 978-3-319-28406-4
