Introduction

“Open” is en vogue. In the course of the general digital transformation and the digitalization of learning and teaching, the open education movement has developed dynamically in recent years (Kerres, 2019) – not only because of the recent Covid-19 pandemic and the ad hoc shift toward online learning. However, the idea of “open education” goes back further and is linked in particular to the emergence of open learning in the 1960s, which sought to reach so-called nontraditional target groups. This has also been the raison d’être of Open Universities, which have always relied on media, since media make distance learning and teaching possible in the first place (Tait, 2008; Xiao, 2018).

Open Educational Resources (OER) are a central element of open education practices (Zawacki-Richter et al., 2020). The term OER was first mentioned in the UNESCO Declaration (2002), referring to “the open provision of educational resources, enabled by information and communication technologies, for consultation, use and adaptation by a community of users for non-commercial purposes” (p. 24). Around 2006/2007, a definition of OER was still being negotiated. In their report on the OER movement to the William and Flora Hewlett Foundation, Atkins, Brown, and Hammond (2007) provided a widely received definition:

OER are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use or re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge. (p. 4)

In the UNESCO (2019) Recommendation, OER are defined as “learning, teaching and research materials in any format and medium that reside in the public domain or are under copyright that have been released under an open license, that permit no-cost access, re-use, re-purpose, adaptation and redistribution by others” (see Section I, No. 1). These definitions show the wide scope of OER. Jung, Sasaki, and Latchem (2016) speak in this context of the “granularity” of learning materials: “OER range from entire courses and massive open online courses to small-scale learning materials, games, simulations, quizzes and other digital resources” (p. 10). Essentially, OER are about making teaching and learning materials available for unrestricted use. Digital materials are particularly suitable for this because they can be copied, shared, and changed as often as desired, without loss and with virtually no spatial restrictions. In his blog, Wiley (2013) describes this openness of use with five Rs: the right to retain, reuse, revise, remix, and redistribute open content. OER might therefore have the potential to facilitate the development of collaborative and participatory learning arrangements (Otto, 2020).

The development and distribution of OER are seen as an important element in the UNESCO (2019) Recommendation toward open and inclusive knowledge societies and the achievement of the UN 2030 Agenda. The implementation of OER contributes especially to the achievement of Sustainable Development Goal (SDG) 4, quality education. However, despite the many initiatives to develop and sustain OER materials and repositories, Jung et al. (2016) conclude that “the take-up of OER has fallen short of expectations” (p. 1). The low adoption rate is largely attributed to potential users’ uncertainty about the quality and appropriateness of the content. The report of the Open Educational Quality Initiative (OPAL, 2011) already identified the “lack of quality or fitness of OER” (p. 8) as an important barrier to the use of OER. More recently, after a review of international approaches to the evaluation of learning materials, Zawacki-Richter and Mayrberger (2017) concluded that no quality assurance procedure or instrument has become widely accepted and used.

The UNESCO (2019) Recommendation on OER also refers to the importance of quality assurance. Member states are encouraged “to develop and integrate a quality assurance mechanism for OER into the existing quality assurance strategies for teaching and learning materials” (Areas of Action, ii Developing supportive policy) and consider “developing and adapting existing evidence-based standards, benchmarks and related criteria for the quality assurance of OER” (Areas of Action, iii Encouraging effective, inclusive and equitable access to quality OER). The great importance of quality assurance of OER is largely undisputed. For example, Camilleri, Ehlers, and Pawlowski (2014) state that “The need for quality assurance mechanisms to support the development and sustainable use of Open Educational Resources (OER) is being raised in the literature and in European and national policy documents as a major challenge and opportunity” (p. 6).

Therefore, the aim of this chapter is to explore potential quality dimensions of digital learning materials and to propose a model and an instrument for the evaluation and quality assessment of OER. We will begin with an overview of international approaches to quality assurance systems for OER in which such an instrument could be applied.

An International Perspective on Quality Assurance Systems for OER

An international comparison of OER digital infrastructures in higher education was conducted within the German research project “Digital educational architectures – Open learning (educational) resources in distributed learning infrastructures” (EduArc; https://uol.de/coer/research-projects/projects/eduarc). This international comparison covered the macro (national and regional context), meso (institutional context), and micro levels (teaching and learning) across ten countries (Australia, Canada, China, Germany, Japan, South Africa, South Korea, Spain, Turkey, and the United States). One of the main issues addressed in this comparative case study was the quality of OER, especially with regard to national standards for the creation, dissemination, and quality assurance of OER at the macro level (Marín et al., 2020). At the meso level, the aim was to observe the development of institutional measures for the creation, dissemination, and quality assurance of OER across countries (Marín et al., under review). Finally, the micro level addressed faculty members’ awareness of institutional procedures related to OER quality assurance and of the people responsible for these procedures across countries (Marín et al., 2022). In this section, a synthesis of the three levels studied in the context of the EduArc project (higher education) is presented, along with some insights into other educational stages.

National and Regional Guidelines and Actors

Marín et al. (2020) addressed the influence of country-specific contexts on the development of national standards for the creation, dissemination, and quality assurance of OER in higher education and provided an overview of the different countries involved in the case study. Their description showed that the degree of centralization of a country’s political structure had some effect on how the quality of OER and their repositories was addressed, but not in a uniform way across all countries.

For instance, the case of China, with a highly centralized political structure, shows this influence on OER quality. The Ministry of Education issued Technical Specifications for Modern Distance Education Resources Construction in May 2000. As the authors highlight, “this non-mandatory standard focuses on the guidelines for resource developers, production requirements, and functions of the management system” (Marín et al., 2020, p. 250). In addition, the Chinese e-Learning Technology Standardization Committee has already developed several national and association standards related to educational digitalization, including the consideration of OER.

On the other hand, most of the countries investigated in the EduArc project showed no notable impact of political structure centralization. These countries did not have any official national quality frameworks or standards concretely linked to OER. Overall, quality assurance of OER has been tied more to the meso level, that is, to the higher education institutions (e.g., South Africa) or, even more often, to individual faculty members (e.g., Japan). What can be highlighted at the national and regional levels is the existence of checklists or evaluation guides related to OER in some of the countries studied. This is the case in Spain, where a working group on institutional repositories (including OER repositories) within the Network of Spanish University Libraries (REBIUN) actively develops documentation to evaluate the status of Spanish OER repositories and guide their evaluation. Similarly, the South Korean governmental organizations connected to the development of Korean Open Course Ware (KOCW) and K-MOOCs have developed various documents to ensure the quality of OER and provide best practices, on the one hand, and to help guide KOCW and K-MOOC development, on the other. A third example is Australia, where different OER guidelines, such as the Feasibility Protocol (Bossu, Brown, & Bull, 2014a), have been developed to assist higher education institutions in making informed decisions about OER adoption.

The actors involved in OER quality at this macro level depend on the country but usually include governments, agencies, librarians, and other working groups (Marín et al., 2020). Actors related to governments and agencies usually also cover educational stages beyond higher education. For example, in China and South Korea, the main actors deeply involved in OER quality are public agencies. In contrast, in Spain, apart from the working group mentioned above for higher education, an association for standardization endorsed by the Spanish government has developed standards for digital educational resources across educational stages through the Learning Object Metadata profile LOM-ES, covering three quality dimensions: technological effectiveness, effectiveness regarding accessibility, and pedagogical effectiveness (INTEF, red.es, & Spanish Autonomous Communities, 2010; Fernández-Pampillón Cesteros, 2017). On the other hand, the United States is a unique case, “since many digital education organisations are involved in defining quality for (O)ER, such as Quality Matters or the Online Learning Consortium, Educause, the Association for the Advancement of Computing in Education and the Association for Educational Communications and Technology” (Marín et al., 2020, p. 250).

Institutional Guidelines and Actors

As regards quality assurance of OER at the institutional level, three different models could be distinguished (Marín et al., under review, pp. 5–6):

  a) Institutional cases in countries with (binding) top-down institutional quality assurance mechanisms for OER, derived from national regulations (China, South Korea, and Turkey). For instance, all inter-institutional platforms in China have quality assurance mechanisms that derive from rules and regulations of the Ministry of Education, which supervises the quality assurance of the “top-quality courses” projects. Similarly, South Korea follows a top-down approach, where the Center for Teaching and Learning of each university is responsible for ensuring OER quality at the institutional level and for following national guidelines. Turkey also adopted a top-down approach: the top management of the higher education institutions is responsible for institutional OER quality assurance, according to national policies.

  b) Institutional cases with their own independent institutional guidelines for OER quality assurance mechanisms (Canada, Japan, Spain). For instance, University H’s (an anonymized large public university in Japan) Center for Open Education uses a set of key performance indicators related to well-established instructional design strategies for online courses when creating and implementing OCW and other OER. In Spain, higher education institutions that support the development of OER have institutional quality assurance mechanisms and guides to support faculty in this endeavor (e.g., the Universidad Carlos III of Madrid).

  c) Institutional cases with basically no institutional OER quality assurance processes, which are left up to individuals (Australia, Germany, South Africa; bottom-up approach). For example, in Australia, there are no quality assurance processes or frameworks related to OER in higher education institutions (Stagg et al., 2018); quality assurance of OER is mostly up to individual members of faculty (academic self-assurance). Similarly, South Africa has no institutional quality assurance processes for OER, and the responsibility also lies with the academic author, following the “pride-of-authorship” model (Hodgkinson-Williams et al., 2013). In Germany, quality assurance of OER in higher education most often does not rely on institutional guidelines; an exception, however, is the state-level platform Hamburg Open Online University, which has quality assurance in place for offerings under its auspices (top-down approach).

Faculty Perceptions About OER Quality Assurance for Teaching and Learning

Marín et al. (2022, pp. 11–12) explored academics’ awareness and perceptions of the quality of OER and of the institutional quality assurance agents involved in OER, as well as the academics’ own involvement as quality assurance agents in OER at the teaching and learning level in seven countries (Australia, Canada, Germany, South Africa, South Korea, Spain, and Turkey). In many of the countries, perceptions of OER quality reflected a common prejudice that OER are of low quality. This is especially the case in Turkey, where openness and OER-related concepts were linked to free sources of low quality. In South Africa, lecturers were concerned about using OER by authors whose reputations are in doubt or not yet established (Madiba, 2018). The poor quality of available OER and concerns regarding the quality of content stored in OER repositories are common challenges related to faculty perceptions in the literature (Bates, Loddington, Manuel, & Oppenheim, 2007; Bossu, Brown, & Bull, 2014b; Mtebe & Raisamo, 2014).

A low awareness along with a lack of frameworks regarding the quality of OER and their infrastructures was highlighted in most of the countries of the study, in line with previous literature (e.g., Baas, Admiraal, & van den Berg, 2019). For instance, in South Korea, the lack of mechanisms to ensure the quality of OCW is a challenge for the active adoption of OCW (Lee & Kim, 2015).

In terms of academics’ awareness regarding agents responsible for OER quality assurance, the outlook is also rather bleak but provides some insights into common actors. For instance, in Spain and Germany, faculty awareness about this issue was low, but in Germany faculty members perceived the IT services for the institutional learning management system (LMS) as a relevant influence. In the universities of both countries, academic staff who used OER were the key actors in defining the quality of OER, of OER metadata, and of OER repositories. This faculty involvement in and responsibility for OER quality was present in other countries too (e.g., Japan, Turkey). In other countries’ institutions (e.g., at the Australian Queensland University of Technology), the library played a key role in OER development through an optional Quality Assurance (QA) stage (Stevens, Bradbury, & Hutley, 2017).

Toward a Quality Model and an Assessment Instrument for OER

The results of the international comparison study described above imply that the perceived low or unclear quality might be a major barrier to the uptake and wide adoption of OER by teachers and faculty members. An abundance of learning materials is freely available on various platforms and repositories, but the selection of high-quality materials remains a challenge.

Against this background, during the development of the Hamburg Open Online University (HOOU, https://www.hoou.de) portal, a study was commissioned to collect an international inventory of instruments and quality criteria for learning materials and OER and to develop a model and an instrument for quality assurance of OER. The model was informed by Almendro and Silveira (2018) who noted that the quality of OER has pedagogical, content, and technical dimensions.

OER Quality Model

The first step in the research project for the HOOU was a search for evaluation instruments for the assessment of OER (Zawacki-Richter & Mayrberger, 2017). Eight different instruments or rubrics with 161 quality criteria were identified. Based on a qualitative analysis of the quality dimensions and criteria, Mayrberger, Zawacki-Richter, and Müskens (2018) proposed a framework of OER quality with two broad quality dimensions – the pedagogical and the technical dimension – and four subdimensions, i.e., content, instructional design, accessibility, and usability, covering a set of 15 quality criteria (see Fig. 1). In contrast to Almendro and Silveira (2018), the content dimension was integrated into the pedagogical dimension on the same level as instructional design, since both subdimensions depend on each other.

Fig. 1

OER quality model proposed by Mayrberger et al. (2018, p. 29)

Table 1 provides an overview of the 15 quality criteria in the OER quality model.

Table 1 Quality criteria in the OER quality model (Mayrberger et al., 2018)

The Instrument for Quality Assurance of OER (IQOER)

The study by Zawacki-Richter and Mayrberger (2017) showed that existing OER quality assessment instruments differ in complexity and depth of detail. Little is known about the reliability and validity of the instruments. Some instruments are based on a quality model with several quality dimensions to which a number of quality criteria are assigned; others consist only of lists of criteria. Some instruments involve a detailed scoring guide for the operationalization of the rating scales (e.g., the LORI instrument by Nesbit, Belfer, & Leacock, 2007), while others consist of simple checklists (see also Yuan & Recker, 2015). It is also worth mentioning the Learning Evaluation Object Platform (LOEP), developed as an integrated platform for learning object evaluation, which facilitates the collaborative evaluation of educational resources (Gordillo, Barra, & Quemada, 2015).

Responding to this need for a validated and reliable quality assessment instrument for OER, such an instrument was developed in the EduArc project (see above) based on the OER quality model by Mayrberger et al. (2018): the Instrument for Quality Assurance of OER (IQOER). The IQOER has two versions: a shorter one using classification scales and a longer one using mean scales based on individual items.

For the short version, a five-level classification scale was operationalized for each of the 15 quality criteria. Table 2 shows the rating scale for the “academic foundation” of OER content. The classification scale allows a ranking on one of five levels, which are marked with colors from red (lowest level) to dark green (highest level). The red, light green, and dark green levels of the rating scales are each described by several statements (descriptors). The intermediate second and fourth levels are not described, so it is up to the raters to interpolate the content of these levels from the other levels.

Table 2 Classification scale “academic foundation” of OER content

The assessment of quality criteria using this kind of classification scale is associated with two problems from the point of view of measurement theory. First, if a characteristic is determined using only a single rating, split-half reliability or internal consistency cannot be determined, because there is no other measure to correlate it with. Second, and more importantly, a classification scale forces a joint evaluation of possibly incompatible statements. Each classification scale (Table 2) consists of several statements that do not necessarily have to be equally true for a particular learning material. For example, a material may well cite bibliographic sources, while the reasoning within the resource is not coherent. In such a case, the rater in the example from Table 2 faces the difficulty of deciding whether the dark green alternative applies. Ultimately, the rater is forced to weigh the different statements arbitrarily and rate accordingly.

An alternative to classification scales is to average across scores from different individual items. Such items consist of a single statement with which the rater expresses agreement or disagreement on a multipoint Likert scale. Ratings using Likert scales thus require raters to make only simple judgments about clear statements on an OER. The long version of the IQOER consists of mean scales, each aggregating five to six individual items. Table 3 shows the “academic foundation” scale of the long version of the IQOER. In this case, the scale score is formed as the mean of the item ratings. The alternative “does not apply at all” is coded as 1 and “fully applies” as 5, and the alternatives in between are coded as 2–4. Items with opposing content (e.g., item 2 in Table 3) are recoded.

Table 3 “Academic foundation” scale using individual items based on Mayrberger et al. (2018, p. 35)
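The recoding and averaging described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the IQOER itself; the example ratings and function names are hypothetical.

```python
# Illustrative scoring of a mean scale such as the one in Table 3.
# Ratings range from 1 ("does not apply at all") to 5 ("fully applies").

def recode(rating: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse-code a negatively worded item (e.g., item 2 in Table 3)."""
    return scale_max + scale_min - rating

def mean_scale_score(ratings: list, reversed_items: set) -> float:
    """Average the item ratings after recoding reverse-keyed items.

    `reversed_items` holds the 0-based indices of negatively worded items.
    """
    adjusted = [
        recode(r) if i in reversed_items else r
        for i, r in enumerate(ratings)
    ]
    return sum(adjusted) / len(adjusted)

# A rater's hypothetical answers to a five-item scale; the second item
# (index 1) is negatively worded, so its rating of 2 is recoded to 4.
ratings = [4, 2, 5, 4, 3]
print(mean_scale_score(ratings, reversed_items={1}))  # 4.0
```

Because the scale score rests on several independent item judgments, measures such as internal consistency become computable, which a single classification rating cannot offer.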

However, the best and most robust quality assessment instrument will not lead to higher acceptance and wider dissemination of OER if it is not integrated into a systematic quality assurance process. These aspects are addressed in the following section.

Implementing a Process of Quality Assurance for OER

Functioning quality assurance requires not only quality models and instruments for quality assessment but also the development of a quality assurance process. For example, UNESCO and the Commonwealth of Learning (2011) demand: “Recognize the important role of educational resources within internal quality assurance processes. This should include establishing and maintaining a rigorous internal process for validating the quality of educational materials prior to their publication as OER” (p. 7).

However, authors and providers of OER platforms face the challenge of how quality assurance can be designed within the process of developing OER. Even if a comprehensive quality assurance model, including corresponding scales, is available, the following questions arise regarding the quality assurance workflow:

  • At what point in the process of creating or using an OER should the quality assessment take place? (Time)

  • What is the purpose of the quality assessment? (Objective)

  • Who assesses the resources by means of the criteria or scales? (Rater)

  • In what way do the quality assurance institution and the authors/developers of the OER work together? (Degree of interaction)

Approaches to quality assurance of conventional, non-free learning materials (e.g., textbooks) cannot always be easily transferred to OER. In this context, Camilleri et al. (2014) explain the differences between the quality assurance of OER and that of non-OER due to the life cycle of OERs, which also includes the possibilities of reuse and adaptation:

…the traditional lifecycle of a resource, particularly with respect to the processes of creation, editing, evaluation and use, is significantly disrupted. Whereas before these steps were traditionally distinct, consecutive and managed by various actors, the freedom granted by OER leads to a blurring of these boundaries. The involvement of many more actors in each step, therefore, means a federation of responsibility for each step, which in turn can lead to cross-over in the functions and timing of processes, as well as sub-cycles (such as several rounds of editing and evaluation). (p. 4)

Aim of Quality Assessment

While anglophone OER platforms such as MERLOT (www.merlot.org) already contain thousands of resources, the stock of OER in other languages that are suitable for higher education usage is currently still very limited. From the perspective of many providers of recently established OER platforms, the aim of quality assessment is therefore to support authors and developers in the creation of OER rather than to make selections of submitted resources.

A quality assessment by users after publication of the resource often helps to inform potential users. In such a crowd rating, the published OER can be continuously rated by users by means of scales. The averaged results of the ratings are constantly updated and presented. In summary, three essential goals of quality assessment can be outlined by means of standards, criteria, and scales: to support the creation and development of the resource, to select the resources or to check minimum standards, and to inform the users.
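The crowd-rating mechanism mentioned above, in which averaged user ratings are constantly updated, can be sketched as an incremental mean that avoids storing every individual rating. The class and attribute names below are illustrative and not taken from any particular OER platform.

```python
# Minimal sketch of a crowd rating: each new user rating updates a
# running mean in constant time and memory.

class CrowdRating:
    def __init__(self) -> None:
        self.count = 0   # number of ratings received so far
        self.mean = 0.0  # current average rating

    def add_rating(self, rating: float) -> float:
        """Incorporate one user rating and return the updated average."""
        self.count += 1
        # Incremental mean update: new_mean = old_mean + (x - old_mean) / n
        self.mean += (rating - self.mean) / self.count
        return self.mean

resource_rating = CrowdRating()
for r in [5, 4, 3, 5]:
    resource_rating.add_rating(r)
print(resource_rating.count, resource_rating.mean)  # 4 4.25
```

On a platform, such a running average could be recomputed and displayed after each new rating, which matches the idea of continuously updated results described above.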

Time of Quality Assessment

The quality assessment can occur at various moments within the following 11-step process of the OER “life-cycle” described by Camilleri et al. (2014):

  1. Creation of the resource by an author/creator

  2. Description of the resource by means of metadata

  3. Approval by the commissioning body of the resource

  4. Publication of the resource, making it available to the wider public

  5. Discovery, the process by which a user finds the published resource

  6. Evaluation or checking of the fitness for purpose of the discovered resource

  7. Resolution, where a handle is used as a precursor to obtaining it

  8. Obtaining the resource, usually by downloading or streaming it

  9. Re-purpose and re-use: the resource may be edited and/or changed by the tutor using the resource

  10. Integration, which describes the process of including it in a larger learning experience (such as a course), or as part of a technical tool such as a virtual learning environment

  11. Use, which describes the actual utilization of the resource to enable a learning experience by the end user/student (p. 15)

Quality assessment by means of criteria, standards, or scales can take place at various points within this process:

  • Content-related criteria can be applied even before the resource is created, for example, when concepts are evaluated by experts within the scope of project funding.

  • During the development process, the standards and criteria can support the authors and developers in aligning their work with the development goals. Formative evaluation can be used to check the level of progress achieved and to define work steps that still need to be done.

  • Immediately before publication, a peer review process can ensure the quality of the content of the resource. Instructional designers and technical experts can check compliance with minimum standards.

  • After publication in repositories, the resources can be assessed by users against the standards, criteria, or scales. A distinction should be made between evaluation by lecturers as indirect users and by learners as end users.

Raters

Depending on the aim and time of the quality assessment, different groups of people can be considered as raters (i.e., for carrying out the assessment):

  • The authors or creators themselves can use standards, criteria, or scales during the development of the OER to identify remaining work steps (self-evaluation).

  • In a summative evaluation, subject matter experts can assess the content quality of the resources in a peer review (cf. UNESCO and Commonwealth of Learning, 2011). The assessment can be done either before publication on a platform or with regard to a concept outline before the development starts.

  • Specialists from the fields of instructional design or technology can check compliance with minimum technical or pedagogical standards before publishing an OER. However, the development can also be monitored and supported by a formative evaluation regarding these standards.

  • An evaluation by users usually takes place after publication of the resource. The use of more subjective rating scales (e.g., how motivating or interesting the resource is perceived) can provide other users with usage information that goes beyond objective assessment standards. Camilleri et al. (2014) call this form of evaluation by users “social ranking” (p. 24).

Level of Interaction

There are usually two parties involved in an OER quality assessment: the institution that initiates the quality assessment (QA agency, often the provider of an OER portal) and those who create or develop the resource. The level of interaction between these two parties can vary greatly.

  • Level 0: No interaction: The QA agency assesses OER without the knowledge of the authors or creators, or does not provide information on the standards, criteria, or scales used.

  • Level 1: Information: The QA agency provides information about the criteria, standards, or scales without concrete advice on how to achieve these criteria.

  • Level 2: Instruction: The QA facility gives concrete instructions on how to achieve the criteria/standards or how to optimize the quality of the resources. The instructions are of a general nature, so they can be applied to a wide range of resources.

  • Level 3: Counselling: The QA organization provides individual counselling to the authors or creators before and/or during the creation of the resource on appropriate ways and activities to optimize the quality of the resource.

  • Level 4: Cooperation: The QA agency provides templates, tools, etc. that facilitate the creation of quality-assured resources or is actively involved in the technical development of the resource itself.

Standards, criteria, or scales are used for all different levels of interaction. While these are only used as an assessment tool in the “no interaction” and “information” scenarios, they form the basis for the “instruction” and “counselling” scenarios. In the “cooperation” scenario, the quality assurance agency develops templates and tools itself based on the standards, criteria, and scales, which are used in the creation of the resources.

OER and OEP

For educational institutions, the use of OER is often only one element on their way to adopting Open Educational Practices (OEP) and an open learning architecture (cf. Camilleri et al., 2014). Ehlers (2011) describes OEP as the following process: “[Using OEP] builds on OER and moves on to the development of concepts of how OER can be used, reused, shared, and adapted [, and] goes beyond access into open learning architectures, and seeks ways to use OER to transform learning” (p. 3). Tillinghast (2020), who speaks of “OER-enabled pedagogy” (p. 168), describes the example of a teacher who creates a chapter she thinks is missing for an OER textbook used in her course. Another example would be the creation of OERs by the learners themselves.

In OEP, in addition to the quality of OER, the quality of the courses in which OER are used, or of the open learning architecture as a whole, moves into focus. Here, quality assurance focuses more on the OEPs and less on the OERs used. Thus, Brückner (2018, p. 60) calls for an “alternative perspective on quality of OER.” She advocates an enhanced understanding of quality that also takes special features of OERs, such as their free accessibility and changeability, into account. An essential aspect is to involve stakeholders in quality assurance at every stage of the development and use of OERs.

Quality Standards and Quality Culture

In summary, the use of standards, criteria, and scales to capture the quality of OERs is by no means a one-time measurement of quality. Rather, standards, criteria, and scales represent the starting points for a complex development and revision process involving different actors and stakeholders. In this process, the quality-assuring agency is often not an independent observer but rather an active co-creator of quality. Also, the selection of resources fulfilling minimum standards is often not in the foreground of this process but rather the active accompaniment and support of the development of OER and the achievement of the highest possible quality under the given conditions.

Eventually, the aim of such a quality assurance process is to establish a “quality culture” (UNESCO and Commonwealth of Learning, 2011) for teaching and learning with OER.

Conclusion

OER can promote wider access to and collaboration on educational materials and thus contribute to the UN Agenda for Quality Education (SDG 4). However, it should be noted that worldwide dissemination and application in the practice of teaching and learning is still limited, even though there are many countrywide initiatives to support the creation of OER and build corresponding infrastructures. These depend strongly on the nature of the respective education system.

The low usage rate of OER is often linked to the question of quality. There is an almost unmanageable variety of OER materials and repositories, which can leave teachers overwhelmed when choosing materials. Interestingly, no widely used instrument for evaluating OER exists that was systematically developed and meets scientific quality criteria. This was the starting point for the IQOER instrument described here.

However, the mere existence of such an instrument is not enough. It must be integrated into a quality assurance process agreed with all stakeholders. When implementing a quality assessment instrument in a quality assurance system, the process must be designed and communicated in such a way that it meets with the greatest possible acceptance on the part of the teachers and faculty members.

Finally, the culture of teaching and learning must change toward Open Educational Practices, in which it becomes a matter of course that high-quality learning materials are created, shared, and further developed together. Only then can we expect OER to be widely disseminated, even beyond the Anglo-American sphere. This would be very desirable, because the need for free learning materials is great in many countries. Especially during the Covid-19 pandemic and in the period thereafter, many teachers created digital learning materials with great effort in so-called emergency remote teaching. It would be a pity if these materials were not shared and developed further in the future.
