Introduction

This chapter continues to explore the governing-evaluation-knowledge interactions in Swedish higher education, now in the context of media exposure, display, and management. It analyses the highly debated national quality assurance and evaluation (EQA) system in operation from 2011 to 2014, described in the previous chapter, focusing on how the evaluations performed under this system were mediated and displayed “in the public eye”. Today, media reporting on national evaluations and quality assurance is high stakes for the parties involved. For the higher education institutions (HEIs), attracting future students and securing a favourable “branding” depend not only on the actual outcome of an evaluation but also, importantly, on how the outcomes are reported and represented by the media and thereby transmitted to the wider public and to different stakeholders. Media display is also important for the evaluation agencies, and media coverage is in itself displayed as a sign of both policy and agency “success”.

As described in the chapter “National Evaluation Systems”, this has not always been the case. Since the introduction of the per-student state grant system, the incentives for HEI branding and PR activities, as well as the potential risk of unfavourable media exposure, have risen considerably. In this context, the 2011–2014 EQA system can be said to have raised the stakes even more. It included both sanctions and rewards: each HEI and programme was given a grade, the top performers received extra funding, and the lowest grade could result in a revoked licence to issue degrees. In addition, the evaluations of study programmes and the resulting grades were to be comparable across universities, as a form of “customer information” for prospective students. To support this ambition, the evaluation reports were standardised, made publicly available, and intended to speak to a wide audience. Taken together, the design of the 2011–2014 EQA system fits well with the “media logic” (see below), in which winners and losers, the potential for scandal, and the issuing of rewards and sanctions tend to be prioritised angles of reporting.

The aim of this chapter is to analyse how evaluation results were communicated to and via the media, by studying media communication and the display of national EQA in the form of the quality evaluations of higher education study programmes carried out under the system in operation from 2011 to 2014. The following questions are addressed:

  • How were the evaluations communicated to the media, and how did the regional media portray the HEIs in the context of national quality evaluations?

  • Did the media coverage reinforce the representations attempted by the responsible agency and the HEIs, or were these images challenged?

  • How can the media-quality assurance relationships be understood in light of the two questions above, and what are the implications for governing?

Next, some conceptual tools and frames are introduced along with some brief notes on the empirical data used in the particular study reported in this chapter. This is followed, first, by a mapping of the attempted framing of the evaluation results made by the two national evaluation agencies (the Swedish Higher Education Authority and the National Agency for Higher Education, with the first replacing the second in 2013), by analysing their press releases from 2011 to 2016. Second, two particular quality evaluations are focused upon, namely, education and specialist nursing, to highlight how four HEIs’ attempted framings were (re)presented by the media. The chapter concludes with a discussion of these findings by pointing to the interdependence and possible reinforcement of the media-quality assurance relationship and by arguing that these have important implications for education governing.

Approaching the Media-Quality Assurance Relationship

This chapter focuses on some of the intersections of the “media society” and the “audit society” in the form of public agency evaluations of HEIs and seeks to explore some of the dual dependencies that these relationships entail and fortify, arguing that they constitute interesting tensions for further exploration from the perspective of governing. On the one hand, the media provide certain interpretative frames by conditioning “rules of the game,” by allowing certain voices to speak and silencing others, and by operating according to a particular format and logic of communication (Hjarvard 2013). At the same time, media reporting depends on receiving certain information and material from perceived credible and legitimate sources, such as agencies and universities (cf. Fredriksson and Pallas 2016; Thorbjørnsrud 2015; Rönnberg et al. 2013).

The study reported in this chapter draws on literature related to different dimensions of media-education governing interactions, including (a) governance and governing work (Clarke 2015; Bell et al. 2010; Newman and Clarke 2009); (b) literature conceptualising the relationship between media, society, and policy/politics/bureaucracy (Christensen and Gornitzka 2018; Crow and Lawlor 2016; Thorbjørnsrud 2015; Hjarvard 2013; Strömbäck 2008); and (c) literature more specifically targeting the media in the context of education as a policy field (Rawolle 2010; Thomas 2009; Anderson 2007; Gewirtz et al. 2004).

In the contemporary “audit society” (Power 1999), scrutiny, evaluation, and control are prominent means of governing institutions, organisations, and professionals (Dahler-Larsen 2011; Grek and Lindgren 2015). The field of higher education is no exception. Under the umbrella of New Public Management, different evaluative activities are linked to and promoted by developments entailing the increased marketisation and privatisation of public welfare, both in Europe and beyond. Within such an agenda, higher education is increasingly conceived as a form of private good (Englund 1996), which positions students as consumers and quality evaluations as means by which to assist, hold to account, regulate, and even fortify these relationships. As expressed by Tomlinson (2017):

[t]he comparative dimension of universities’ performance in the form of league tables and information sets is seen as crucial in information [of] student “choice”. Consumerism is portrayed as part of an increasingly subservient and defensive institutional climate that reflects a largely reactive position of professional accountability to external stakeholders’ demands for transparent forms of provision that meet instant gratification needs. (Tomlinson 2017, p. 454)

Such “needs” are reinforced, shaped, and channelled by media reporting and media displays of evaluation information, not least when that information can be ranked and league-tabled in a format fitting the “media logic” (Altheide and Snow 1979). Media outlets thus actively contribute to constructions of meaning (Thomas 2009; cf. Dahler-Larsen 2012 on constitutive effects). In a way, both the media and the public agencies that rank and evaluate HEI performance can be argued to share a public mission to scrutinise – they are often perceived as defenders and enhancers of the public interest. But there are also fundamental differences between them. The national agency is an integral part of the state and its bureaucracy, in the form of public administration; the news media are not bound by such formal or legislated obligations.

Logics of Appropriateness

In the language of normative institutionalism, a certain logic of appropriateness is in operation: taken-for-granted routines and operations defining action and relations (March and Olsen 1989, 2006), “collections of interrelated rules and routines that define appropriate action in terms of relations between roles and situations” (Peters 1999). The “media logic” (Altheide and Snow 1979) is characterised by fast and easy access, readership-friendly accounts, and the like. It tends to favour “unambiguity, episodic frames (…) to focus on conflicts and has a prevailing negative bias (…) designating roles of heroes, victims and villains” (Thorbjørnsrud 2015, p. 181).

The media logic intersects – and perhaps collides – with bureaucratic ideals or visions. Such bureaucratic virtues are often conceptualised in terms of, for instance, impartiality, correctness, neutrality, or adherence to regulations. At first glance, these key terms describing bureaucratic ideals may not be easily aligned with the media logic (Thorbjørnsrud 2015). In this chapter, some of the instances when media logic and bureaucratic public agency work intersect are pinpointed and examined through the evaluations performed within the 2011–2014 EQA system.

The Public Agency-Media Relationship

At the outset and from the agency perspective, at least three strategies or responses can be identified in the agency/HEI-(news) media relationship: to accommodate, to be proactive, and/or to be protective (cf. Table 1).

Table 1 Public administration and news media interactions: strategies and stances

These three strategies, along with their compatibility with the media logic of appropriateness versus the bureaucratic ideal, will first be discussed in relation to the attempted framings and media communications by the agency and the selected HEIs (see below). Secondly, they will underpin the analysis of HEI-media interactions in the context of results from two national subject area evaluations performed by the responsible national agency within the framework of the 2011–2014 EQA system.

Cases and Empirical Sources

The study focuses on a selection of HEIs. These cases were selected as part of the research project to represent different overall outcomes in the national evaluations (in terms of the share of study programmes judged “inadequate”, cf. Ericson 2014) and different institutional characteristics (university versus university college; older, established institution versus younger). This multiple case study (Stake 2006) thereby came to include four HEIs that displayed variety in terms of age, size, specialisation, and geographical location and that were characterised by different contextual conditions as well as different outcomes in the national quality evaluations (see also Table 1 in the chapter “Enacting a National Reform Interval in Times of Uncertainty: Evaluation Gluttony Among the Willing”).

In brief: Orion is a large, old university with several faculties and subject areas that, on an aggregate level, did well in the evaluations; Hercules is an old, specialised university with one faculty and mainly professional programmes that did not do as well; Virgo is a comparatively recently established university college with mainly professional programmes that did well in the national evaluations; and Pegasus is a comparatively recently established university with both professional and academic programmes and courses that, overall, did not see much success in the national evaluations.

This chapter is based on a range of different empirical sources: first, press releases and evaluation reports from the responsible national evaluation agency (SHEA) and its predecessor (SNAHE), and second, press releases and information from the four HEIs’ webpages, such as communication policies and records/archives of press releases. Some press releases were hard to retrieve, as they date back several years and the websites may not have been kept up to date. In some of these instances, a report from Academic Rights Watch (2015) – a foundation established to safeguard academic freedom and rights – was valuable for tracing and obtaining press releases from the four studied HEIs. Third, media articles from the Swedish media database Mediearkivet were analysed, using search terms such as the HEI’s name, the evaluation agency’s name, and the subject area. To ensure confidentiality for the participants in the HEIs, the newspapers are not explicitly mentioned or listed as references. These searches did not provide an overall picture of the media debate in general but were focused on the particular HEIs and the selected subject areas. This means that the more general, national debate going on at the time (cf. chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”) is not included, as a result of the particular selection criteria employed in this media study. Finally, the chapter also uses data collected within the wider research project, analysed to provide additional contextual understanding of the four HEIs, including interviews with vice chancellors and senior management at the faculty level at the four HEIs.

The data were ordered through thematic coding. For the press releases, the themes were derived from the main attempted “pitch”, i.e. whether the texts mainly highlighted positive or negative aspects, whether they were mixed (with both top and low performers highlighted by the agency), or whether they were neutral, with no value statements or attempted angling present. In the primarily local media reporting, the thematic coding targeted how the media angle related to the HEI press release, who was allowed to speak (through quotes in the articles), and in what role and context (for instance, as shamed or blamed or as heroes/winners) (cf. Ekecrantz and Olsson 1991).

Before turning to the empirical study, the next section briefly provides some additional background and information on the processes and rationale of the national agency quality evaluations within the 2011–2014 EQA system.

The Policy Context and the 2011–2014 EQA System

In general, Sweden has followed the same kind of overall reform path that other EU and OECD countries have undergone. The keywords of these reform efforts include, for instance, efficiency, transparency, customer orientation, and accountability, with an intensified focus on comparisons, data, indicators, and the reinforcement of external evaluation. Drawing on Karlsson et al. (2014), the following main traits can be highlighted in the recent Swedish higher education policy context. The first concerns the HEIs and their relationship with the external environment. Reforms have been launched that can be expressed in terms of marketisation and competition, including an intensified hunt for resources. Even though attending Swedish higher education is free of charge, HEIs receive public funding based on the number of students they enrol. In addition, the introduction of tuition fees for non-EU citizens in 2011 marked a break with the former non-fee-paying system of Swedish HE. In the wake of these general developments, there is also an intensified emphasis on rankings and league tables aimed at steering and guiding the potential “customer” (see, for instance, the development of a web-based tool for comparing HEIs with regard to their quality, as assessed in the 2016 EQA system) (SHEA 2018).

Secondly, reforms have also targeted what happens inside the HEIs, by reforming university management and organisation. Overall, several recent Swedish higher education policies aim at HEIs taking a more active, self-governing role, which is assumed to lead to increased efficiency and improved outcomes. The so-called autonomy reform (Government Bill 2009/2010:149) is one example of the effort to make HEIs more independent and self-governing in some respects, such as internal organisation and the hiring and promotion of staff. In these reform efforts, several performance-monitoring measures have been instigated by the state, such as audits, indicators, and intensified external evaluation; bibliometric follow-up of research and the 2011–2014 EQA system are just two examples. As we saw in the previous chapter, the latter has also contributed to sparking a wider debate about trust and accountability in Swedish HE (see also Karlsson et al. 2014; cf. Kettis and Lindberg-Sand 2013).

As described in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”, Sweden introduced a highly debated framework for assessing quality in higher education in 2011 that was explicitly focused on results and student outcomes: the 2011–2014 EQA system. Implementing this framework led to the Swedish agency being excluded from the European Association for Quality Assurance in Higher Education (ENQA), of which Sweden had originally been a founding member (see chapter “Europe in Sweden” and Segerholm and Hult 2015). Among other things, the Swedish EQA system at the time did not consider HEIs’ internal quality-assurance procedures; it focused only on student results and outcomes. ENQA criticised this, and it turned out to be a contentious and decisive issue in the Swedish policy discourse. In addition, the criticism targeted the fast-paced “shortcut” policy process and the detailed instructions set up by the ministry, that is, both the content of the system and the process by which it was designed and approved (cf. chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”).

The 2011–2014 EQA system targeted study programmes that could lead to the award of a first- or second-cycle qualification and assessed the extent to which students’ learning outcomes corresponded to the intended learning outcomes; the main assessment point was students’ independent projects (called final degree projects). The evaluation resulted in a final overall grade of very high, high, or inadequate quality. The lowest grade meant a follow-up by the agency, with the possibility of a revoked entitlement to award degree qualifications (SHEA 2014a); extra funding was given to the top performers (highest grade). The evaluations in full, and not only the final grades, were then made public on the agency’s website, most often accompanied by a press release from the agency.

The media searches and the data collection in this chapter target the 2011–2014 EQA system as well as the period 2014–2016, when SHEA was preparing a new EQA system but continued to carry out follow-ups – a period we label a “reform interval” (see the chapter “Enacting a National Reform Interval in Times of Uncertainty: Evaluation Gluttony Among the Willing”). More specifically, no new evaluations of study programmes were undertaken during this period, but the study programmes that had failed and received the inadequate quality grade were reassessed to see whether they could now be passed.

Evaluation Agency Framing of the Evaluation Results

As stated in the introduction to this chapter, media display is indeed high stakes for HEIs. But media reporting is important to the evaluation agencies as well. This is visible in the SHEA’s annual report on its activities, directed to the government and the general public:

Our work receives much external attention and interest. This is evident, not least, from the fact that media reporting on SHEA increased three-fold compared to last year. The agency’s work is important and contributes to improving Swedish higher education. (SHEA 2015, p. 4)

As the above quote illustrates, media coverage is in itself displayed as a sign of policy and agency success and is linked to the mission to improve HE. As Gewirtz et al. note, “(c)ertain policies require the demonstration of progress and success and (…) this in itself becomes an intrinsic feature of the policies” (Gewirtz et al. 2004, p. 327). Given the importance of the media and of display – also as a means of actually making the policy “work” – the next section focuses on the evaluation agencies’ media communication by analysing their press releases.

Relevant press releases on the topic of evaluations of study programmes, issued by the responsible agencies (the SNAHE and then the SHEA), were collected for the years 2011–2016 (N = 36). As previously mentioned, the analysis sought to identify their framing by categorising them as mainly positive, negative, mixed, or neutral (Table 2).

Table 2 SNAHE and SHEA evaluation results press releases, 2011–2016 (N = 36)

In the positive category, many releases concerned follow-ups conducted one year after a programme had been judged inadequate. Very few programmes ever had their right to issue degrees revoked. Since almost all of the programmes passed the follow-ups, the positive evaluation results also function as an implicit justification of the agency (compare bureaucratic branding): “now” there has been “a significant increase in quality”, as a headline may put it, i.e. the agency and its evaluation work matter and raise quality. The analysis of agency press releases thereby illustrates how a favourable bureaucratic branding strategy is discernible within this format of directed media communication. This can be interpreted within the performative dimension of agency reputation management (Christensen and Gornitzka 2018), in which the agency is seen to deliver outcomes relating to its core mission.

The analysis also identified mixed press releases, which highlighted both negative and positive aspects. These releases often begin by describing what is lacking and how many and which HEIs failed or were found inadequate, but they also mention the top performers that received the very high quality grade and an extra government grant – a way of singling out the “heroes”.

The negative press releases are in themselves attractive to the media logic: “failures” give leeway to sensation and even scandal, to victimised students, and to villainous, underperforming HEIs. The press release on midwife training can serve as an example: it was headlined “Midwife training programmes are inadequate”, yet in fact only 3 programmes were judged inadequate, while 15 passed with high quality. The neutral press releases were the most common framing. These releases merely state that “a new evaluation is finished” and list the results, without any quotes, opinions, or statements from agency representatives.

Media Coverage of Two Subject Areas

In the following sections, results for two subject areas are presented in more detail, in the particular context of the four studied HEIs. The first area, education, had a mixed press release; the second, specialist nursing, had a negative one. Both categories can be said to connect well with the media, based on what the media logic would amplify and value (negative angles, villains, deviations rather than success stories, etc.). Furthermore, these subject areas concern professional degrees and large programmes that are important for meeting the demand for qualified staff across Sweden, making them courses and programmes of particular public and political interest.

Education

The education subject area comprised programmes in education, specified as also including didactics, educational leadership, and psychotherapy. The SHEA launched a “mixed” press release:

One third of all programs, 19 out of 57, got the grade inadequate quality. These programs are offered in 12 out of the total 24 HEIs assessed in this evaluation. The shortcomings are often connected to lacking scientific quality in students’ independent degree projects. (…) Five programs are judged as very high quality and will receive an extra quality grant from the government, and 33 programs are of high quality. (SHEA 2014b)

The table below summarises the HEIs’ results in the evaluation, how they presented the evaluation results themselves, and the (mainly regional) media coverage (Table 3).

Table 3 Subject area: Education

In Orion’s case, the “inadequate” grade was not mentioned in the HEI press release, but it was picked up by the student union journal. Other than that, there was little media attention. Hercules chose to communicate and highlight its very high grade (not the high grade), but there was not much media response or attention. Virgo had one evaluated programme that received the grade “high quality”, which was also highlighted in a press release. This was followed by a “kind” and smooth write-up in a local newspaper that even “advertised” how students could go about applying to this successful programme (which had not even received the highest grade), stating the application deadlines, whom to contact, and so on. Pegasus had mixed results, and the regional paper chose to highlight the good grade in its headline. It also allowed Pegasus representatives to elaborate and to assure readers that the institution was working to improve.

Specialist Nursing

The results from the evaluation of specialist nursing and care led the SHEA to launch quite a negative press release:

The SHEA has finalised a comprehensive evaluation of programs in specialist nursing and care. Seventy-nine out of 134 programs are judged as having inadequate quality, and their right to award degree qualifications is now questioned. This is a serious situation, and the HEIs of course need to rectify this (…), says University Chancellor Harriet Wallberg. (SHEA 2014c)

This particular evaluation made it to the national news and was distributed by the main national news agency, TT. It then travelled to other regional media outlets. Table 4 below shows how the HEIs communicated the evaluation results, alongside whether and how the evaluations were picked up and framed by the (regional) news media.

Table 4 Subject area: Specialist nursing

As displayed in the table, Orion attempted a framing of its own, but the regional paper went with the national news agency’s formulations. Hercules had very poor results and attempted some damage control in its releases, by being open and receptive and stating that it was working hard to improve; this made it to the national media. Virgo did well and took advantage of it: the “hero story” it launched successfully came through in the subsequent media coverage. Pegasus had three regional outlets covering the story; two wrote their own articles, while one went with the national TT news agency text. The two articles written by local journalists let Pegasus representatives explain and elaborate on how they were working hard to improve.

Media Communication, Agencies, and Evaluation

The analysis in this chapter showed that the agencies’ press releases varied in their “pitch” and that they displayed signs of bureaucratic branding, such as in the follow-up press releases from the national evaluation agencies (compare performative reputation management, Christensen and Gornitzka 2018). The negative releases also showed some tendencies to scandalise, as illustrated by the midwife evaluation press release. Press releases from the HEIs also varied in their attempted angling; unsurprisingly, most attempted to frame the issue in a favourable and constructive light, but not all did, and there was noticeable variation in this regard.

Throughout the study, the evaluation agencies (the SNAHE and the SHEA) appeared to be conceived of as an untouchable form of check-up. They are perceived as credible, reliable, and good sources of unbiased information on which the media depend. This finding is similar to that of a study on another Swedish agency, the Swedish Schools Inspectorate (Rönnberg et al. 2013). The study reported in this chapter highlighted how these central and perceived credible state actors chose to frame their press releases and communicate with the media. These attempted framings easily flow through and are not generally subjected to additional media scrutiny. The mission of independent media scrutiny was not highly pronounced in the analysed material, and the media seldom reprocessed evaluation results data or highlighted alternative ways of interpreting them. This gives leeway both for successful pitching to the press and for bureaucratic branding activities.

Bureaucratic Logic of Appropriateness in Agency Branding

In balancing and adapting the bureaucratic logic to the format of the media logic, the agencies and, to some extent, the HEIs were seemingly successful in how they met and interacted with the (regional) media. The findings point to some instances where favourable bureaucratic branding of the HEI was amplified by supportive local media reporting, without the media asking for additional sources and/or information, such as in the Virgo hero story and, though less pronounced, in the Pegasus case. But the pattern is not clear-cut and may depend on the HEIs’ different roles and academic profiles: Virgo and Pegasus are both regional universities with important ties to their local and regional areas.

The findings also point to an adapted form of packaging and selling “bureaucratic” information, which makes the mediation and translation from the agency/HEI to the media particularly smooth. The bureaucratic values and ideals function as an important “currency” in the packaging and selling of information via the media; in fact, the bureaucratic logic of appropriateness is part of the bureaucratic branding. It makes for a good media pitch, as it is perceived as credible and linked to perceptions enhancing its legitimacy. All of this is managed by skilled communication offices in both the agencies and the HEIs. Based on these findings, we may be witnessing a “professionalisation” of public agency-media relations, including bureaucratic branding activities (cf. Christensen and Gornitzka 2018). In line with the spread of managerialism, this study points to the important role played by a professionalised cadre of “bureau-branding” employees at universities and public agencies who work to design, manage, and steer external and internal communications. These employees are professionally “pitching the press” under the legitimising shield of the bureaucratic logic.

What Happened to Critical Debate?

An important silence concerns the absence of critical debate on the evaluation framework’s validity. Very little of the criticism and debate about the 2011–2014 EQA system (see the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”) was visible in this study. It needs to be reiterated that the empirical material does not cover the overall media discourse in general but rather pinpoints certain HEIs and two evaluations in particular. Even so, this material showed very few traces of the intense critical debate on the 2011–2014 EQA system. This debate seems to have been effectively silenced in the reporting on evaluation results while the system was actually in operation. The framework that produced these results was questioned in what appears to be a “parallel debate”, one not intersecting with the reporting on the evaluations’ actual results.

Journalists did not use the angle of the political debate or the criticised design of the evaluation system in these reports. Nor did the cited HEI representatives bring it up, judging by how they are quoted in the analysed media material. The HEIs and their representatives did not attempt to pitch the criticism of the system as a possible defence and/or counterargument when responding to the results of individual programme evaluations. Largely, when an HEI was judged inadequate, the approach taken by its representatives was to be submissive and to show compliance and a willingness to adapt to the SHEA and improve, assuring the public that they would rectify the situation and meet the standards and that work was already under way – a calming message. When objecting, HEI representatives risk being judged as having something to hide; they may come across as unreceptive, and any critique can be turned against them, making the HEI look defensive, stiff, and unwilling to improve. In this way, the performative agenda narrows the space of what can be said.

Finally

This chapter empirically illuminated some of the processes in which high-stakes national evaluations, PR strategies from the HEIs, and the media logic meet and intertwine. It gave some empirical accounts of how processes of mediatisation take place in and through agencies and highlighted some important interdependencies that need to be critically discussed in relation to their possible constitutive effects (cf. Dahler-Larsen 2012). Not only does the existence of national EQA systems form certain perceptions of what good higher education is “supposed to be”; the mediatisation of evaluation results, and central stakeholders’ navigation of these processes, are also important parts of these perception formations and of the potential constitutive effects of quality assurance and other forms of evaluation. The way the media uncritically present evaluation results from the EQAs may also contribute to the image of such evaluations as unproblematic and objective ways of measuring and controlling “quality” in higher education.

Having outlined some of the mediated representations of the evaluations conducted within the 2011–2014 EQA system, we turn in the next chapter to the period when the national agency and the HEIs prepared for the new, not yet formally decided upon, 2016 EQA system that was to replace the debated 2011–2014 system. We will describe and analyse how the four case HEIs – Orion, Hercules, Virgo, and Pegasus – navigated, developed, and designed their own internal quality assurance systems.