Results of the Real-time Delphi Survey

Chapter in: Future Perspectives for Higher Education

Abstract

This chapter provides an overview of the results of the real-time Delphi survey. It begins with an explanation of how the study was conducted: the final expert panel is described before the structure of the survey is specified. The qualitative and quantitative results of the study are then examined in detail. The evaluation methods applied in the study are presented, and their selection is justified.

Notes

  1. An overview of the survey questionnaire can be found in the electronic supplementary material.

  2. For the application of Likert scales in relation to dimensions I and D, see, among others, Ecken et al. (2011); T. Meyer et al. (2021); Roßmann et al. (2018, p. 138).

  3. For the use of self-perceived confidence in Delphi studies, see, among others, Beiderbeck et al. (2021b); T. Meyer et al. (2021).

  4. For a comprehensive account and discussion of the various metrics used to represent consensus in Delphi surveys, see Birko et al. (2015); von der Gracht (2012).

  5. The original reports an SD of ±1.64, which corresponds to 41% of the underlying 4-point scale.

  6. The original reports 80% of opinions within a 2-point range on a 7-point scale.

  7. The original reports different values for different scales: 1 point on a 7-point scale, 2 points on a 10-point scale, 1 point on a 4- or 5-point scale, and 1 point on a 3-point scale. An illustrative consensus check along these lines is sketched after these notes.

  8. For an overview of the limitations of quantitatively oriented content analysis and the necessity of integrating hermeneutic elements, see Kracauer (1952).

  9. For evaluative qualitative content analysis, see Kuckartz (2018, pp. 123–142).

  10. Later works add Mayring and Fenzl's (2019, p. 637) explication as a third variant of category formation and application. Explication is also called context analysis; here, "individual unclear text passages are made the object" (Mayring & Fenzl, 2019, p. 637).

  11. For a comparative overview of procedures and metrics with respect to interrater reliability, see Lombard et al. (2002); Krippendorff (2018).

  12. A detailed overview of the interrater reliability analysis in this thesis is provided in the electronic supplementary material.

  13. For a detailed description of the criticism of rankings, see Part II, Subchapter 4.5.1.

  14. For a detailed description of the third mission of higher education institutions, see Part II, Subchapter 4.4.

  15. Supporting the projection meant, in the context of EP, justifying a high probability of occurrence; regarding I, it meant arguing for a high impact of the development on higher education institutions. Non-supportive text passages should be understood as diametrically opposed to this.

  16. For details, see Part II, Subchapter 3.2.

  17. For details, see Part II, Subchapter 4.2.

  18. For details on humanistic education, see Part II, Subchapter 4.2.

  19. For an overview of societal changes and the associated need for lifelong learning, see Schmidt-Hertha et al. (2020).

  20. According to Schmidt-Hertha (2014, p. 29), lifelong learning can be interpreted both as an aim of educational policy and as a paradigm of educational theory. Both understandings were included in the projection development.

  21. Education cities must be distinguished from the concept of learning cities. According to Eckert and Tippelt (2017) and Tippelt et al. (2009), the latter concept comprises (political) ambitions to foster lifelong learning in a decentralized manner and thus to strengthen regions, for example by developing learning regions. The former, by contrast, denotes a physical, centralized place of the future, established by institutions to set themselves apart from virtual competition, as von der Gracht and Becker (2014) describe.

  22. The technological applications are referred to as intelligent digital systems. This formulation was intended to stimulate the creativity of the experts and not to limit the assessments to individual technologies such as artificial intelligence.

  23. The quoted expert mentioned the concept of the universal system of higher education. For further details, see Part II, Subchapter 3.1.

  24. The simple distinction between private and public institutions by funding source is not uncontroversial. A competing classification of higher education institutions derives four new institutional types from this distinction; see Marginson (2018, p. 331). For the projection development in this research, however, a simple distinction by institutions' funding type is sufficient.

  25. For the 10 projections, each expert assessed four EP-dimensions, one I-dimension, and one D-dimension.

  26. The CI for experts' control of reinforcement was the difference between the sum of values submitted for IC factors and the sum of values submitted for EC factors. It indicated the extent to which an expert possessed an internal or external control of reinforcement. A minimal computational sketch follows these notes.

  27. In the Kruskal-Wallis test, the Bonferroni correction was applied to ensure valid results; a sketch of the mechanics also follows these notes. For the Bonferroni correction, see Benjamini and Hochberg (1995); Hochberg (1988).

  28. For an overview of the use of R in science, see Tippmann (2015).

  29. In the context of the group analysis, the terms personal dimension and projection-related dimension can be replaced by the statistical terms independent variable (for the personal dimension) and dependent variable (for the projection-related dimension).

  30. In total, 420 individual comparisons were carried out: the six personal dimensions were each compared with respect to seven projection-related dimensions across 10 projections (6 × 7 × 10 = 420).

  31. A detailed overview of the results of the desirability bias analysis is provided in the electronic supplementary material.

  32. See Subchapter 7.2.4, personal dimension age and projection-related dimension I.
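To make the consensus thresholds cited in notes 5 to 7 more tangible, the following is a minimal sketch in Python (the study itself relied on R, see note 28). The function names, the example ratings, and the exact interpretation of a "2-point range" are illustrative assumptions, not the metrics actually applied in this study.

```python
import statistics

def consensus_by_sd(ratings, max_sd):
    """Consensus if the standard deviation of the ratings does not exceed a threshold."""
    return statistics.stdev(ratings) <= max_sd

def consensus_by_range_share(ratings, width, min_share):
    """Consensus if at least min_share of the ratings fall within a window of
    `width` adjacent scale points (one possible reading of an "x-point range")."""
    best_share = 0.0
    for lo in range(min(ratings), max(ratings) + 1):
        in_window = sum(lo <= r <= lo + width - 1 for r in ratings)
        best_share = max(best_share, in_window / len(ratings))
    return best_share >= min_share

# Hypothetical ratings on a 7-point scale.
ratings = [5, 5, 6, 6, 6, 7, 4, 6, 5, 6]
# Threshold cited in note 6: 80% of opinions within a 2-point range.
print(consensus_by_range_share(ratings, width=2, min_share=0.8))  # True
# SD-based check in the spirit of note 5 (the 1.64 there refers to a 4-point scale).
print(consensus_by_sd(ratings, max_sd=1.64))  # True
```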
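Note 26 defines the control index (CI) as a plain difference of sums. A minimal sketch, assuming each expert's IC and EC factor values are available as numeric lists (variable names and example values are hypothetical):

```python
def control_index(ic_values, ec_values):
    """Control index as described in note 26: sum of internal-control (IC) factor
    values minus sum of external-control (EC) factor values. A positive value points
    to a rather internal, a negative value to a rather external control of reinforcement."""
    return sum(ic_values) - sum(ec_values)

# Hypothetical example: one expert rated three IC and three EC factors.
print(control_index(ic_values=[4, 5, 3], ec_values=[2, 3, 2]))  # -> 5
```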
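Note 27 refers to the Kruskal-Wallis test combined with a Bonferroni correction. The sketch below shows the mechanics with SciPy; the grouping variable, the example data, and the assumption that the correction is applied across all 420 comparisons (note 30) are illustrative, and the actual analysis was carried out in R (note 28).

```python
from scipy.stats import kruskal

# Hypothetical ratings of one projection-related dimension,
# split by a personal dimension (e.g. age group).
groups = {
    "under_40": [5, 6, 6, 7, 5],
    "40_to_60": [4, 5, 5, 6, 4],
    "over_60":  [6, 6, 7, 7, 6],
}

h_stat, p_raw = kruskal(*groups.values())

# Bonferroni correction: multiply the raw p-value by the number of tests, capped at 1.
n_tests = 420  # total number of individual comparisons in this study, see note 30
p_adjusted = min(p_raw * n_tests, 1.0)

print(f"H = {h_stat:.2f}, raw p = {p_raw:.4f}, Bonferroni-adjusted p = {p_adjusted:.4f}")
```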

Author information

Correspondence to Nick Lange.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 1547 kb)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Cite this chapter

Lange, N. (2023). Results of the Real-time Delphi Survey. In: Future Perspectives for Higher Education. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-40712-4_7

  • DOI: https://doi.org/10.1007/978-3-658-40712-4_7

  • Publisher Name: Springer VS, Wiesbaden

  • Print ISBN: 978-3-658-40711-7

  • Online ISBN: 978-3-658-40712-4

  • eBook Packages: Social Science and Law (German Language)
