
Systems Engineering Science

Handbook of Systems Sciences

Abstract

This chapter addresses the scientific foundation of systems engineering (SE) and how it can and should support the practice of SE. Changes occurring in the nature and scope of SE are driving an expansion in scope of the science that underpins SE practice. Because of that relationship, the chapter describes the changes in SE and in science-based engineering (SBE), and then presents the SE Science (SES) resulting from those changes. Changes in SE are presented as a contrast between classic SE and this new, emerging SE. The expansion that SE is experiencing – beyond its traditional application domains such as defense, transportation, and energy into even more complex domains such as healthcare, law, social systems, and national policy making – is discussed, along with the increasing prominence of autonomous and intelligent machine agents and other elements of change. This expansion also necessitates a corresponding expansion in supporting science, from the traditional mechanical and physical sciences into the life and social sciences, and it makes clearer the need for systems and computational sciences. Fortunately, the sciences needed for this broader scope do not have to be invented; they already exist and need only be more explicitly applied, leveraged, and adapted to SE. Part of the leverage discussion is how the expanded SES can be made more accessible (and translatable) to practitioners of the expanded SE discipline. The changes exhibited in emerging SE and SBE, and the agent orientation in which they are presented, are captured in a generalized agent model, a unifying framework formalized in an appendix to the chapter.


References

  • Ackoff R, Emery F (1972) On purposeful systems. Aldine Atherton, Chicago

  • Al-Fuqaha A, Guizani M, Mohammadi M, Aledhari M, Ayyash M (2015) Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun Surv Tutor 17(4):2347–2376

  • Branke J, Mnif M, Müller-Schloer C, Prothmann H, Richter U, Rochner F, Schmeck H (2007) Organic computing – addressing complexity by controlled self-organization. In: Proceedings of the 2nd international symposium on leveraging applications of formal methods, verification and validation (ISoLA 2006). IEEE Computer Society

  • Cahn JM (1958) Automation in highway design. IRE Trans Ind Electron:1–3

  • Caro R (1974) The power broker. Knopf, New York

  • Checkland P (1999) Systems thinking, systems practice: includes a 30-year retrospective. Wiley, Chichester. ISBN 0-471-98606-2

  • Coons SA, Mann RW (1960) Computer-aided design related to the engineering design process. Technical Memorandum 8436-TM-5, MIT Electronic Systems Lab. http://images.designworldonline.com.s3.amazonaws.com/CADhistory/8436-TM-5.pdf. Accessed 31 May 2018

  • Diallo SY, Wildman W, Shults FL, Tolk A (eds) (2018, forthcoming) Human simulation: perspectives, insights, and applications. Series on new approaches to the scientific study of religion. Springer, Cham

  • Dobson S, Sterritt R, Nixon P, Hinchey M (2010) Fulfilling the vision of autonomic computing. IEEE Computer 43(1):35–41

  • Donaldson W (2017) In praise of the “Ologies”: a discussion of and framework for using soft skills to sense and influence emergent behaviors in sociotechnical systems. Syst Eng 20(5):467–478

  • Dorri A, Kanhere S, Jurdak R (2018) Multi-agent systems: a survey. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2831228

  • Duranton G, Turner M (2011) The fundamental law of road congestion: evidence from US cities. Am Econ Rev 101(6):2616–2652. https://doi.org/10.1257/aer.101.6.2616

  • Eshleman A (2014) Moral responsibility. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/moral-responsibility/. Accessed 31 May 2018

  • Etschmaier MM (2014) Purposeful systems: a conceptual framework for system design, analysis, and operation (keynote address). In: 29th international conference on computers and their applications (CATA-2014), Las Vegas, 24–26 March 2014

  • Friendshuh L, Troncale L (2012) SoSPT I: identifying fundamental systems processes for a general theory of systems (GTS). In: Proceedings of the 56th annual conference, ISSS, 15–20 July 2012, San Jose State University. http://journals.isss.org/index.php/proceedings56th

  • Güdemann M, Nafz F, Ortmeier F, Seebach H, Reif W (2008) A specification and construction paradigm for organic computing systems. In: Proceedings of the IEEE international conference on self-adaptive and self-organizing systems. IEEE Computer Society

  • Holt J, Perry S (2013) SysML for systems engineering: a model-based approach, 2nd edn. Institution of Engineering and Technology

  • Hybertson D (2001) A uniform component modeling space. Informatika 25(4):475–482

  • Hybertson D (2009) Model-oriented systems engineering science: a unifying framework for traditional and complex systems. Auerbach/CRC Press, Boca Raton

  • Hybertson D (2011) Next generation systems engineering: expansion, foundation, unification. Paper presented at the International Council on Systems Engineering (INCOSE) symposium, June 2011

  • Hybertson D, Hailegiorghis M, Griesi K, Soeder B, Rouse W (2018) Evidence-based systems engineering. Syst Eng 21(3):243–258. https://doi.org/10.1002/sys.21427

  • INCOSE (2014) A world in motion: systems engineering vision 2025. International Council on Systems Engineering

  • ISSS (2018) International Society for the Systems Sciences: about the ISSS. http://isss.org/world/about-the-isss. Accessed 31 May 2018

  • Kephart JO, Chess DM (2003) The vision of autonomic computing. IEEE Computer 36(1):41–50

  • Klir G (2001) Facets of systems science, 2nd edn. IFSR international series in systems science and systems engineering. Springer

  • Koch E (1996) Orphan elephants go on the rampage. New Scientist. https://www.newscientist.com/article/mg15120390-300-orphan-elephants-go-on-the-rampage/. Accessed 31 May 2018

  • Li F, Li J, Zhu J (2017) An integrated trust evaluation model based on multiagent system. In: 9th IEEE international conference on communication software and networks

  • Long D, Scott Z (2011) A primer for model-based systems engineering. Vitech Corp. http://www.ccose.org/media/upload/MBSE_Primer_2ndEdition_full_Vitech_2011.10.pdf. Accessed 1 July 2018

  • Mao W, Gratch J (2012) Modeling social causality and responsibility judgment in multi-agent interactions. J Artif Intell Res 44:223–273

  • Moysen J, García-Lozano M, Ruíz S, Giupponi L (2018) Conflict resolution in mobile networks: a self-coordination framework based on non-dominated solutions and machine learning for data analytics. IEEE Comput Intell Mag 2018:52–64

  • Niazi M, Hussain A (2011) Agent-based computing from multi-agent systems to agent-based models: a visual survey. Scientometrics. https://doi.org/10.1007/s11192-011-0468-9

  • OMG (2017) About the OMG system modeling language specification version 1.5. Object Management Group. https://www.omg.org/spec/SysML/. Accessed 1 July 2018

  • Regalado D, Harris S, Harper A, Eagle C, Ness J, Spasojevic B, Linn R, Sims S (2015) Gray hat hacking: the ethical hacker’s handbook, 4th edn. McGraw-Hill Education, New York

  • Rizk Y, Awad M, Tunstel E (2018) Decision making in multi-agent systems: a survey. IEEE Trans Cogn Dev Syst. https://doi.org/10.1109/TCDS.2018.2840971

  • Roth A (2017) Shared agency. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/shared-agency. Accessed 31 May 2018

  • Samuel A (1959) Some studies in machine learning using the game of checkers. IBM J Res Dev 3(3):210–229

  • Schlosser M (2015) Agency. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Fall 2015 edn). https://plato.stanford.edu/entries/agency/. Accessed 31 May 2018

  • SEBoK Authors (2017) Life cycle models. In: BKCASE editorial board, The guide to the systems engineering body of knowledge (SEBoK), v. 1.9, Adcock RD (EIC). The Trustees of the Stevens Institute of Technology, Hoboken. www.sebokwiki.org. Accessed 11 June 2018. BKCASE is managed and maintained by the Stevens Institute of Technology Systems Engineering Research Center, the International Council on Systems Engineering, and the Institute of Electrical and Electronics Engineers Computer Society

  • Simmons MK (1984) Artificial intelligence for engineering design. Comput Aided Eng J 1:75–83

  • Simon H (1996) The sciences of the artificial, 3rd edn. MIT Press, Cambridge

  • Singer J, Sillitto H, Bendz J, Chroust G, Hybertson D, Lawson H, Martin J, Martin R, Singer M, Takaku T (2012) The systems praxis framework. In: Systems and science at crossroads – sixteenth IFSR conversation, SEA-SR-32, Institute for Systems Engineering and Automation, Johannes Kepler University, Linz, pp 89–90. http://www.ifsr.org/index.php/the-systems-praxis-framework-ifsr-conversations-2012/; brochure: http://systemspraxis.org. Accessed 31 May 2018

  • Smiley M (2017) Collective responsibility. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/collective-responsibility/. Accessed 31 May 2018

  • Soeken M, Häner T, Roetteler M (2018) Programming quantum computers using design automation. In: Design, Automation and Test in Europe, Dresden

  • Speck J (2012) Walkable city: how downtown can save America, one step at a time. Farrar, Straus and Giroux, New York

  • Tolk A, Wildman WJ, Shults FL, Diallo SY (2018) Human simulation as the lingua franca for computational social sciences and humanities: potential and pitfalls. J Cogn Cult 18(5):462–482

  • Wikipedia ABM (2018) Agent-based model. https://en.wikipedia.org/wiki/Agent-based_model. Accessed 31 May 2018

  • Wikipedia SS (2018) Systems science. https://en.wikipedia.org/wiki/Systems_science. Accessed 31 May 2018

  • Wikipedia SSM (2018) Soft systems methodology. https://en.wikipedia.org/wiki/Soft_systems_methodology. Accessed 11 June 2018

  • Yilmaz L, Ören T (2009) Agent-directed simulation and systems engineering. Wiley, Berlin


Author information

Correspondence to Duane Hybertson.

Appendix A: Generalized Agent Model

A generalized agent model is described in this section to provide a common orientation for the discussion in the chapter. It is motivated by the greater variety of types of players or actors prominent in the expanded concept of SE and SBE, which reflects a diffusion of responsibility beyond the systems engineer. First, a brief orientation: Since this chapter is in the engineered systems part of the Handbook, we take the general case to be a set of agents (such as systems engineers) that produce a system, called the engineered system or target system or produced system. We use the term system in this chapter to mean target or produced system, unless otherwise noted or qualified. But it needs to be made clear that the scope of agent goes beyond humans to multiple types of actors and forces, and the scope of produced system goes beyond human-engineered systems to include natural systems produced without human involvement, as well as hybrid systems produced by a combination of humans, other intelligent agents, and natural forces. This scope manifests itself in the discussion and examples in the remainder of this chapter.

The Agent Model

The term agent is now defined and explored. The notion of agent (and the closely related concepts of agency and action) has an extensive philosophical history – see, e.g., Schlosser (2015). An agent in general is a person or thing that has the capacity to act and thereby to have an effect. In this chapter, an agent is an entity that affects an entity, where the affected entity may be the agent or another entity. More specifically, agent is a role played by an entity A that has an effect E on a system S.

The agent situation is depicted in Eq. 1:

(1) {A ➔ E(S)}

An agent A has an effect E on a system S with these characteristics:

  • Agent A may have more than one effect (E1, E2…) on system S

  • Effect E may be due to multiple agents (A = A1 + A2…), called a collective agent

  • Agent A may have the same effect E on multiple systems (S1, S2…)

  • If S = A (i.e., if agent A has an effect E on itself), then A is (regarded as) a system

  • Effects E1, E2… from agent A may be temporary, and may vary across time (T1, T2…)

  • The effect of an agent on a system depends not only on the action of the agent, but also on the state of the environment and the state of the system. Thus, the same action by the same agent in different circumstances may produce different effects.

  • An effect E is generally regarded as positive or negative, or in the range of positive to negative. However, different observers (O1, O2…) of E may regard it differently – e.g., O1 regards E as positive, O2 regards it as negative. Likewise, the same observer may regard E differently at different times – e.g., O1 regards E as negative at time T1 and positive at T2. In this latter example, E does not change but the interpretation of E by O1 changes.

  • An agent may affect what a system is (i.e., affect its formation or composition) or what a system does (i.e., its behavior or operation).

Note that agent is not an inherent role {A}, and it is not a role with respect to a given system {A➔S}. It is only a role with respect to a designated effect on a given system, as in Eq. 1.
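
To make the role-based definition concrete, the following is a minimal Python sketch of Eq. 1 (all names here are hypothetical illustrations, not from the chapter): agent is not a property an entity has, but a role an entity plays relative to a designated effect on a designated system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """Any person or thing; 'agent' and 'system' are roles, not kinds."""
    name: str

@dataclass(frozen=True)
class Effect:
    """A designated effect E; its valence is observer-relative."""
    name: str

@dataclass(frozen=True)
class AgentRelation:
    """Eq. 1: {A -> E(S)} -- entity A has effect E on system S."""
    a: Entity
    e: Effect
    s: Entity

def is_agent(a: Entity, e: Effect, s: Entity,
             relations: set[AgentRelation]) -> bool:
    """A is an agent only relative to a designated effect E on a given S."""
    return AgentRelation(a, e, s) in relations

# An entity can play the system role in one relation and the agent role
# in another; an agent acting on itself (S == A) is regarded as a system.
engineer = Entity("systems engineer")
design = Effect("architecture defined")
target = Entity("target system")
relations = {AgentRelation(engineer, design, target)}
assert is_agent(engineer, design, target, relations)
assert not is_agent(target, design, engineer, relations)
```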

Types of agents of interest to SE, SBE, and SES include humans such as engineers (including systems engineers) and scientists (including systems scientists); intelligent machines; and environmental factors, both natural (e.g., constraints, forces, and resources of nature) and human-induced (e.g., laws, regulations, standards, and human and economic resources).

An important characteristic of agents used in this chapter is purposefulness. An agent can be purposeful (intentional, goal-driven) or nonpurposeful; i.e., its effect on a system can be intentional or unintentional.

How does a purposeful agent relate to a purposeful system? There is a body of literature on purposeful systems. Etschmaier (2014, p. 6) states that “a system is purposeful if some of the relationships with the environment can be recognized as furthering an identifiable goal.” Ackoff and Emery (1972, p. 31) characterize a purposeful individual or system as goal-seeking: one that can produce (1) the same functional type of outcome in different structural ways in the same structural environment and (2) functionally different outcomes in the same and different structural environments; it can change its goals under constant environmental conditions, and it selects its goals as well as the means by which to pursue them.

The idea of purposeful systems is mentioned here to contrast it with purposeful agents that produce a system, while recognizing that a collection of agents that produce a system is itself a purposeful system. Both purposeful agents and purposeful systems are in the scope and province of SES. The discussion in this chapter focuses more on the purposefulness of agents, because that is the primary locus of change driving the revised and expanded models of SE, SBE, and SES.

The purposeful agent distinction is now added to the above formulation of the agent situation. A preliminary depiction of purposefulness is:

(2) Purposeful: Entity A has purpose P to achieve effect E on system S: {A ➔ P ➔ E(S)}

(3) Nonpurposeful, expressed in two alternative ways:

    (a) Entity A does not have purpose P to achieve effect E on system S: {A ➔ ¬P ➔ E(S)}

    (b) Or: it is not the case that entity A has purpose P to achieve effect E on system S: ¬{A ➔ P ➔ E(S)}

However, we need to account for two facts. First fact: Regardless of purpose, by definition, if A does not have an effect E on system S, then A is not an agent of E(S). Thus, for both the purposeful and nonpurposeful formulations above (Eqs. 2 and 3), Eq. 1 must also hold: {A ➔ E(S)}. Purposeful agent A not only has the purpose P of achieving E on S; A must actually achieve E on S. Nonpurposeful agent A must achieve E on S even though A had no purpose of doing so. These more complete respective definitions are expressed as:

(4) Purposeful agent: {A ➔ P ➔ E(S)} ∧ {A ➔ E(S)}

(5) Nonpurposeful agent:

    (a) {A ➔ ¬P ➔ E(S)} ∧ {A ➔ E(S)}

    (b) ¬{A ➔ P ➔ E(S)} ∧ {A ➔ E(S)}

The nonpurposeful agent may be one of two types:

  • The agent is incapable of having a purpose or goal (e.g., the natural environment; forces and constraints of nature; nonorganic elements).

  • The agent is capable of having a purpose or goal (e.g., humans), but has no purpose with respect to effect E on system S – although still achieves E on S.

Second fact: For most systems with any degree of complexity, including those of interest to SE and SES, the relationship between purpose and resulting effect is far from straightforward. The difference between intended and actual effect may take many forms with a variety of names: failed system; system does not satisfy requirements; unintended consequences; undesired side effects; serendipitous effects.

Using our current formulation, A has the purpose of achieving E1 on S, but does not achieve E1. A may have no effect on S, or may have unintended effect E2 on S. These outcomes are expressed as:

(6) No effect: {A ➔ P ➔ E1(S)} ∧ ¬{A ➔ E1(S)}

    Result: A is not an agent for E1 on S.

(7) Only unintended effect E2: {A ➔ P ➔ E1(S)} ∧ ¬{A ➔ E1(S)} ∧ ¬{A ➔ P ➔ E2(S)} ∧ {A ➔ E2(S)}

    Result: A is not an agent for E1 on S, but is a nonpurposeful agent for E2 on S. It is interesting that in this case A is not a purposeful agent for either E1 or E2 on S.

Example of Eq. 7 above: Suppose a police force (A) wants (P) to reduce the number of assault weapons in the community (E1(S)) and institutes a gun buy-back program, paying $600 for each weapon. Now suppose that citizens can buy these weapons from dealers for $550 each. Instead of achieving the reduction E1, the result is unintended effect E2: a surge in gun sales, incentivized by the ability of citizens to buy weapons from dealers and sell each of them to the government for $50 profit. (This is of course an entirely fictional example.)
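
The outcome logic of Eqs. 4, 5, 6, and 7 can be sketched as a small classification function applied per effect. This is a minimal illustration under the chapter's definitions; the function name and boolean encoding are hypothetical, not from the chapter.

```python
def classify(intends: bool, achieves: bool) -> str:
    """Classify entity A with respect to a single effect E on system S.

    intends  -- {A -> P -> E(S)} holds (A has purpose P to achieve E on S)
    achieves -- {A -> E(S)} holds (A actually has effect E on S)
    """
    if intends and achieves:
        return "purposeful agent for E on S"      # Eq. 4
    if not intends and achieves:
        return "nonpurposeful agent for E on S"   # Eq. 5
    if intends and not achieves:
        return "not an agent for E on S"          # Eq. 6
    return "no agency relation for E on S"

# Gun buy-back example, i.e., Eq. 7: the police force A intends the
# reduction E1 but achieves only the unintended sales surge E2.
print("E1:", classify(intends=True, achieves=False))   # not an agent
print("E2:", classify(intends=False, achieves=True))   # nonpurposeful agent
```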

An interesting example of a combination of the unintended outcomes expressed in Eqs. 6 and 7 is the well-established pattern of road-building for the purpose (P) of reducing traffic congestion (E1). A strong intuitive belief of city planners is that adding more lanes and building more roads will reduce congestion. Multiple studies (e.g., Duranton and Turner 2011) have shown that this is not the result. Instead, traffic increases and congestion remains about the same. Thus, measured by degree of congestion, Eq. 6 holds; there is no change in congestion (E1). Measured by amount of traffic, Eq. 7 seems to hold; there is increased traffic (E2), which is an unintended effect. This pattern was observed as early as the 1930s, when Robert Moses was building roads and bridges in New York City: the more he built, the more traffic increased without reducing congestion (Caro 1974). The pattern is called induced demand (Speck 2012): increasing supply increases demand. It applies to other domains as well as road-building. The reverse pattern, reduced demand, also holds: tearing down or removing roads, even heavily used ones, does not tend to increase congestion in the area (Speck 2012).

Collective Agent

Most systems are produced not by one agent, but rather by a set of agents that we call a collective agent. This holds for artificial systems, natural systems, and hybrid (combination of artificial and natural) systems. In particular, it holds for all systems of interest to SE, SBE, and SES.

First is a discussion of some collective agent properties. This is followed by some examples that illustrate collective agents.

Collective Agent Properties

  • A collective agent can be any mix of purposeful and nonpurposeful individual agents (none, some, or all are purposeful).

  • A collective agent can have degrees of coordination, i.e., of explicitly working together toward a common goal. For example, an SE project has high coordination; but a system such as a tree, while the product of multiple agents, is not produced via coordination by this definition. Coordinated agency is sometimes referred to as shared agency (e.g., Roth 2017).

  • Individual agents that are part of a collective agent may not know or be aware of each other.

  • Individual agents may have differing purposes or priorities. Potential variations of purpose among the individual agents of a collective agent lead to varying degrees of cooperation, debate, competition, conflict, and subterfuge. These possibilities are obvious among purposeful agents, but they can exist to some extent even between purposeful and nonpurposeful agents. A classic example of the latter is the perpetual motion machine: the goal of a purposeful agent, the engineer, to create a perpetual motion machine conflicts with the constraints of a nonpurposeful agent, the limits of energy and thermodynamics.

  • More formally, agent A1 may intend to achieve effect E1 on system S, while agent A2 may not intend to achieve E1, or may intend to achieve E2, or may intend to prevent E1. A common example is malicious or maleficent agents, discussed in the chapter. In the latter case, several outcomes may result:

    • E1 is achieved. A1 is an agent for E1 on S while A2 is not an agent for E1 on S.

    • E1 is not achieved. A1 is not an agent for E1 on S while A2 is an agent for preventing E1 on S – i.e., A2 has an effect on S by preventing E1 on S.

    • E1 is partially achieved on S. A1 is an agent regarding E1 on S by partially achieving it, while A2 is an agent regarding E1 on S by partially preventing it.

    • Outcome of tradeoffs rather than malicious agents: E1 is a goal of both A1 and A2, but with differing priorities: E1 is the top priority of A1 for S, but for A2, E1 is a lower priority than E2 for S. Suppose A1 is a performance engineer, A2 is a security engineer, E1 is system performance, E2 is system security, and there is a tradeoff between performance and security of S. A1 wants to maximize E1, while A2 wants to maximize E2. The resulting compromise means that both A1 and A2 have an impact on both E1 and E2 and thus are both agents (elements of a collective agent) regarding E1 and E2 on S, as shown in the sketch after this list.
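
A minimal sketch of that tradeoff outcome (the toy utility model, parameter, and names are assumptions for illustration, not from the chapter): a single design parameter trades E1 against E2, and the compromise setting means each engineer's agency shows up in both effects.

```python
# Toy tradeoff: one design knob in 0.0..1.0 trades performance (E1)
# against security (E2); a compromise gives both agents influence
# over both effects.

def performance(knob: float) -> float:   # E1, maximized by A1
    return knob

def security(knob: float) -> float:      # E2, maximized by A2
    return 1.0 - knob

# A1 (performance engineer) alone would pick knob = 1.0; A2 (security
# engineer) alone would pick knob = 0.0. A simple compromise averages
# the two preferred settings.
preferred = {"A1": 1.0, "A2": 0.0}
knob = sum(preferred.values()) / len(preferred)

# Both effects now differ from what either agent alone would have
# produced, so A1 and A2 are both agents regarding E1 and E2 on S.
print(f"E1 = {performance(knob):.2f}, E2 = {security(knob):.2f}")
```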

Examples

Example 1: SE Project

A project that produces a system is a collective activity of multiple agents such as systems engineers, project managers, builders, testers, sponsors, users, etc. In addition, the environment affects the system being produced, including both facilitators and constraints, ranging from physical constraints and features to laws and regulations to a variety of resources such as funds, staff, and tools.

The SE project is a purposeful activity with a goal common to many of the agents involved – although not all. For example, the physical constraints represent agents that affect the system but are not purposeful; they are simply natural constraints.

Example 2: Oak Tree

An oak tree is produced by a collection of agents, typically all natural (i.e., nonhuman) agents. Perhaps a squirrel buried the acorn in the soil. The acorn itself is clearly a significant agent in producing the tree, but it cannot achieve its potential to become a tree without environmental agents that include soil nutrients, air, sunlight, water from rain, and an appropriate temperature range. But what does an oak tree have to do with SE or SES? Well, perhaps a program of tree planting is a system solution to problems of soil erosion and air quality. SE does not engineer the trees, but it can influence where, when, how many, and what types of trees are planted, based on knowledge produced by SES.

In the typical case of an oak tree, all agents can be considered non-purposeful. Although one could argue that the squirrel purposely buried the acorn, its purpose was not to grow an oak tree, but to store the acorn for later retrieval and eating. The resulting oak tree was accidental, reflecting the unintended effect of Eq. 7 above. However, decisions made to implement a tree planting program – when, where, etc. – would clearly be purposeful.

Example 3: Environmental Impact

Consider two cases: First, an industry damages the environment by building industrial plants and allowing their toxic byproducts to flow into the environment. This typically involves multiple people and organizations. They are not necessarily working together, but their effect is that of a collective agent. Second, a group of citizens including local families, environmental advocates, the legislature, etc., acts to restore and sustain the environment – e.g., raising visibility of the damage, getting laws and regulations passed, finding ways to contain or neutralize the damage, and restoring the environment.

From an agent perspective, the industrial plant decision makers likely do not individually or collectively have the purpose of damaging the environment. But that is the collective effect, and they are purposely taking no action to avoid the damage. On the other hand, the advocates for restoring and sustaining the environment are a purposeful collective agent with a common goal and coordinated action.

Example 4: Hurricane

A hurricane forms, lives, and dies under natural forces and conditions involving pressure, temperature, moisture, and movement of air. Now suppose humans find a feasible way of dissipating a hurricane before it reaches land.

The life of a hurricane from beginning to end results from a collection of nonpurposeful agents, and in that sense is not in the purview of SE – although it is in the scope of SES. But if humans can find a way to end or weaken it before it reaches land, that effect is due to purposeful agents, probably in concert with leveraging nonpurposeful agents.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this entry


Cite this entry

Hybertson, D. (2020). Systems Engineering Science. In: Metcalf, G.S., Kijima, K., Deguchi, H. (eds) Handbook of Systems Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-13-0370-8_18-1


  • DOI: https://doi.org/10.1007/978-981-13-0370-8_18-1


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-0370-8

  • Online ISBN: 978-981-13-0370-8

  • eBook Packages: Springer Reference Business and Management; Reference Module Humanities and Social Sciences; Reference Module Business, Economics and Social Sciences
