1 Introduction

Visual analytics is “the science of analytical reasoning facilitated by interactive visual interfaces” [1]. Applied to data, visual analytics techniques produce dashboards, among other visual analysis tools [2]. A dashboard is defined as a single-screen visual representation of data from several sources, in which graphical displays and tables present qualitative and quantitative indicators. The most relevant information on a dashboard must be assimilable by managers at a single glance, with a view to improving decision-making [3, 4]. Over the last 10 years, dashboard technologies have been deployed widely in the field of healthcare [5]. Clinical dashboards enable easy access to several sources of data on a large number of patients, after aggregation and synthesis into concise, usable indicators. Furthermore, clinical dashboards are intended to provide clinicians with feedback on their practices and thus enable improvements in the quality of patient care [6, 7].

Healthcare organizations have introduced dashboards for various purposes, such as monitoring health system performance [8], reducing medication errors and thus optimizing treatment [9, 10], improving decision-making in the Emergency Department [11,12,13], reducing the incidence of infections [14,15,16] and improving the quality of care in maternity units [17]. The end users may be physicians [8, 16, 18], pharmacists [9] or nurses [5, 12]. The use of clinical dashboards was reportedly associated with shorter report turnaround times [18] and a lower incidence of ventilator-associated pneumonia [14].

The data collected by an anesthesia information management system (AIMS) are mainly used to improve the overall documentation of activities and procedures or to monitor specific activities (such as blood transfusion or compliance with antibiotic administration protocols) [19,20,21,22,23]. Although the importance of improving intraoperative practices (such as mechanical ventilation, blood pressure control, and anesthesia depth monitoring) has been emphasized in recent years, there are still very few reports on the use of visual analytics tools that could help anesthesiologists monitor and improve their professional practice. Nelson et al. have reported on the implementation of dashboards to support clinical consistency in medications and airway management for children receiving radiation therapy [24]. Other researchers have reported the use of anesthesia dashboards to reduce discrepancies in controlled substance documentation in the operating room [25] or as an audit tool for obstetric anesthesia and pediatric anesthesia practice [26, 27].

It is widely accepted that end users must be involved throughout the design process (from the earliest conceptual steps to the final evaluation), so that the technology is usable (i.e. can be used easily, correctly, and safely) and thus achieves its intended clinical and/or organizational impact [28]. At Lille University Medical Center (Lille, France), more than 65,000 anesthesia procedures are performed each year by 15 specialist surgical units (e.g. heart and lung surgery, obstetrics, orthopedics, and pediatric surgery). The Medical Center’s Department of Anesthesia is supported by a data warehouse [29,30,31] fed with data from the AIMS and a billing application (diagnoses, medical procedures, hospital stays, death, etc.). The data warehouse also frequently supplies data for retrospective clinical studies. In an earlier survey, we had found that our medical center’s anesthesiologists were keen to introduce dashboards for clinical research, the evaluation of professional practice, and organizational management [32]. The objective of the present study was to describe the user-centered development, implementation and preliminary evaluation of clinical dashboards dealing with unit management and quality assessment in the anesthesia units at Lille University Medical Center.

2 Materials and methods

In a first step (“end user needs”), we met potential end users and conducted semi-structured interviews to define the tool’s end goals. We used the interview material to identify and synthesize issues, and thus identify indicators (variables, measurements, and filters). Each indicator was associated with one or more dashboards. In a second step (“prototyping”), we developed a number of potential solutions by applying good visualization practice relevant to the tool under development (including simple representations, consistent layouts, labels, and date formatting) [33,34,35] and presented them to end users for appraisal. In the third and last step (“deployment and preliminary evaluation”), the dashboards were implemented and made accessible to end users for everyday use. After a period of use, user feedback was collected and analyzed.

2.1 End user needs

In order to record end user needs, two investigators conducted semi-structured interviews together. The semi-structured interviews were based on a grid that enabled the investigators to explore the issues of interest, the currently available key indicators, and the indicators’ availability, representations and limitations ("Appendix 1").

The interviews were conducted in the Department of Anesthesia between March and May 2019 and were audio-recorded. We contacted physicians from all 15 anesthesia units and met the respondents or other physicians whom the respondents recommended.

Similar themes and questions that had been expressed in different ways by the participants were grouped together under a single topic. The themes were prioritized by the frequency of reporting. For each theme, we checked on the availability of the corresponding data in the hospital’s data warehouse or the technical feasibility of retrieving additional data from the hospital’s main information system. Based on the clinicians’ feedback and the scientific literature, we selected key indicators for each topic.

2.2 Prototyping

Data were retrieved from an anesthesia data warehouse [29, 30] developed at Lille University Medical Center. The warehouse contains pre-operative and intraoperative data from the AIMS and post-operative data from a billing application.

The web interface was implemented using HTML, CSS and JavaScript, while PHP and Oracle were used on the server side. The dashboards were rendered using the Chart.js and D3.js visualization libraries [36]. The application was implemented on an Apache web server [37] running on a secure, private virtual server (Windows Server 2012 R2 Datacenter Edition). The server could be accessed over the hospital’s private network after user authentication. The clinical dashboards were updated every three months. The web application’s architecture is shown schematically in Fig. 1.

Fig. 1
Raw data from the AIMS and billing software were integrated into a data warehouse. An aggregation step produced suitable aggregated tables for each dashboard. The application was hosted on the hospital’s web server and could be accessed over the hospital network after user authentication.
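As an illustration of the rendering layer, a dashboard panel drawn with Chart.js might be configured as follows. This is a minimal sketch: the element name, labels, and values are hypothetical, and the production code is not reproduced here.

```javascript
// Minimal Chart.js sketch of a dashboard panel (hypothetical names and
// illustrative data only; not the production code). Assumes Chart.js is
// loaded and the page contains <canvas id="proceduresChart"></canvas>.
const ctx = document.getElementById('proceduresChart');

new Chart(ctx, {
  type: 'bar',
  data: {
    labels: ['Jan', 'Feb', 'Mar'],              // time period on the x-axis
    datasets: [{
      label: 'Number of anesthesia procedures', // explicit labelling, per the cited guidelines
      data: [512, 478, 530]                     // illustrative values only
    }]
  },
  options: { responsive: true }
});
```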

Each dashboard corresponded to a single theme and presented the theme’s key indicators. To ensure that the indicators’ visual representations were well understood, we developed different versions of each indicator. The clinicians were invited to compare the versions, select their preferred version, and explain how they understood it. We selected the best-liked version of each indicator. For each dashboard, we developed several templates based on good visualization practice [33,34,35]: for instance, clear labelling, consistent positioning of buttons and features across the dashboards, and a global arrangement that respects the sequence of cognitive tasks involved (select and filter first, at the top and on the left of the dashboard, and then read the main part of the screen). We then tested different layouts, formats and styles (colors, text format, etc.) with the aim of offering a concise, precise, clear representation. Each template was submitted to the users for appraisal, and the best-liked template was selected for further use.

The data were aggregated in advance on the server side, in order to reduce client-side calculation times and avoid sharing non-aggregated data. All indicators were computed beforehand for each combination of dimensions; hence, when the application was queried, the indicators were displayed immediately.
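The principle can be sketched as follows (the data structure and names are hypothetical; the actual aggregated tables were stored in Oracle): every indicator is precomputed for each combination of dimensions, so answering a query amounts to a simple key lookup rather than a client-side computation.

```javascript
// Sketch of the pre-aggregation principle (hypothetical structure; the real
// aggregated tables live in Oracle). Every indicator is precomputed for each
// combination of dimensions (here, unit x period), so serving a query is a
// key lookup, and no patient-level data ever reaches the browser.
const precomputed = {
  // key = unit + '|' + period
  'obstetrics|2019-Q2': { nProcedures: 1342, pctLowTidalVolume: 78.4 },
  'pediatrics|2019-Q2': { nProcedures: 905,  pctLowTidalVolume: 81.0 }
};

function getIndicators(unit, period) {
  // No computation at query time: the aggregated row is returned as-is.
  return precomputed[unit + '|' + period] || null;
}

console.log(getIndicators('obstetrics', '2019-Q2'));
// -> { nProcedures: 1342, pctLowTidalVolume: 78.4 }
```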

2.3 Deployment and preliminary evaluation

Anesthesiologists who were involved in the development process were informed by e-mail of the dashboards’ availability, content, and instructions for use. Furthermore, they were asked to disseminate this information to any other potentially interested colleagues.

Two months after the dashboards had been implemented, we interviewed (i) the anesthesiologists involved in the dashboard development and (ii) other anesthesiologists aware of the project. The interviewees were asked to use the dashboard in front of the investigator. Afterwards, we asked the interviewees to rate the dashboard’s ease of use, its accessibility, its practical suitability, and whether it matched their needs. At the end of the interview, the interviewees were asked to complete the system usability scale (SUS) questionnaire [38, 39]. The 10-item SUS questionnaire provides an overview of subjective assessments of usability. Each item is rated on a Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). The final scores for items 1, 3, 5, 7, and 9 were equal to the respondent’s Likert scale score minus 1, whereas the final scores for items 2, 4, 6, 8, and 10 were equal to 5 minus the respondent’s Likert scale score. To obtain the overall SUS score (range: 0 to 100), the item scores were added and then multiplied by 2.5; the higher the SUS score, the more usable the dashboard.
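The scoring procedure described above can be transcribed directly into a short function (a minimal sketch of the standard SUS computation; the function name is ours):

```javascript
// SUS scoring as described above: odd-numbered items contribute
// (response - 1), even-numbered items contribute (5 - response); the sum of
// the ten item scores is multiplied by 2.5, yielding a score from 0 to 100.
function susScore(responses) { // responses: array of 10 Likert values (1-5)
  if (responses.length !== 10) {
    throw new Error('SUS requires exactly 10 item responses');
  }
  const sum = responses.reduce(
    // i is 0-based, so even indices correspond to items 1, 3, 5, 7 and 9
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r),
    0
  );
  return sum * 2.5;
}

// Example: 4 for every positive item and 2 for every negative item -> 75
console.log(susScore([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]));
```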

3 Results

3.1 End user needs

Of the 21 anesthesiologists invited to take part in the design process, 12 agreed and were interviewed. These anesthesiologists came from eight of the department’s anesthesia units and varied in their level of experience: 2 had been in practice for less than 5 years, 5 for 5 to 10 years, and 5 for over 10 years. Three of the anesthesiologists were unit managers. The mean (SD) interview duration was 45 (12) min.

We identified 17 themes and 29 related issues in the context of unit management and quality assessment. Thirty-nine indicators/metrics were then identified. Some indicators were common to several issues (e.g. the number of procedures), although the exact representation could differ from one theme to another. Each type of indicator had to be computed for several time periods (the week, the month, and the year) and for each anesthesiology unit (heart and lung surgery, obstetrics, pediatrics, etc.). These themes, questions and indicators are detailed in Table 1. Measures and sample values are also available for each indicator in Online Resource ESM1.

Table 1 A synthesis of user needs: identified themes, issues and related indicators, according to the context

The following functional requirements were also identified: sending the dashboard by e-mail, printing it (for display in the department), easy but secure access over the hospital network, filtering by categorical variables (such as ASA status) and by time period (including a personalized date filter), and the ability to switch the indicators from a number to a percentage.
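The last of these requirements (switching an indicator between a count and a percentage) can be sketched as a simple client-side formatter (a hypothetical helper, not the production code):

```javascript
// Sketch of the count <-> percentage switch (hypothetical helper): the same
// precomputed value is rendered either as a raw count or as a percentage of
// the selected unit's total, with no further server query.
function formatIndicator(value, total, asPercentage) {
  return asPercentage
    ? (100 * value / total).toFixed(1) + ' %'
    : String(value);
}

console.log(formatIndicator(42, 168, false)); // "42"
console.log(formatIndicator(42, 168, true));  // "25.0 %"
```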

We also identified five technical requirements: the solution had to be free of charge, easy for future developers to maintain, easy to update, equipped with user-friendly graphics, and easy to connect to the hospital information system.

3.2 Prototyping

Three of the identified themes were excluded due to a lack of data. Of the 14 remaining themes, ten were developed as dashboards; these corresponded to the themes most frequently mentioned during the interviews and whose implementation was judged to be feasible (indicated by a superscript “a” in Table 1).

Each dashboard had the same layout and graph arrangement and was composed of four sections, giving a clear, uncluttered interface. Section 1 comprised the dashboard’s title, information on data availability, and a print button. Section 2 included a filter for selecting the surgical unit; its time scale could be switched to display information from the previous 3, 6 or 12 months, the whole period from 2010 onwards, or a custom date range. Section 3 displayed a summary table. Section 4 was the main part of the dashboard and contained all the charts.
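As an illustration, the Section 2 time-scale switch can be sketched as follows (the helper’s name and shape are assumptions, not the production code):

```javascript
// Sketch of the Section 2 time-scale switch (hypothetical helper). The user
// can display the previous 3, 6 or 12 months, the whole period from 2010
// onwards, or a custom date range.
function resolveDateRange(choice, today, custom) {
  today = today || new Date();
  const from = new Date(today);
  switch (choice) {
    case '3m':  from.setMonth(from.getMonth() - 3);  break;
    case '6m':  from.setMonth(from.getMonth() - 6);  break;
    case '12m': from.setMonth(from.getMonth() - 12); break;
    case 'all':    return { from: new Date(2010, 0, 1), to: today };
    case 'custom': return custom; // { from, to } supplied by the date picker
    default: throw new Error('Unknown time scale: ' + choice);
  }
  return { from, to: today };
}
```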

3.3 Deployment and preliminary evaluation

The dashboards went online on September 1st, 2019. Twenty end users (4 residents, 4 nurse anesthetists, and 12 anesthesiologists, including the head of the department and a unit manager) from nine anesthesia units were interviewed. Twelve users (60%) had not taken part in the development step and so were not familiar with the dashboards. Eight of the interviewees had been in practice for less than 5 years, 6 had been in practice for 5 to 10 years, and 6 had been in practice for over 10 years.

The interviewees had a good opinion of the dashboards, which were considered to be highly usable. The mean (SD) overall SUS score was 82.6 (11.5). The results for each item are presented in Table 2.

Table 2 System usability scale scores

Overall, the dashboards were considered to be user-friendly and easy to read, with rapidly accessible information. The end users considered the dashboards to be a good way of monitoring changes in practice from an individual perspective. Moreover, the interviewees considered that the dashboards could be used to monitor the impact of a change in the unit’s quality improvement policy (e.g. documentation of the anesthesia procedure) and compliance with current guidelines (e.g. on ventilation).

Despite the very positive feedback, several opportunities for improvement were identified. Firstly, the end users wanted the home page to be more attractive. Secondly, the computation of some indicators needed to be explained more clearly. Thirdly, some interviewees would have liked more unit-specific dashboards (e.g. for the assessment of specialist surgical procedures).

Lastly, the interviewees reported that their newly acquired familiarity with dashboards was likely to prompt them to come up with more themes that could be usefully addressed by these systems.

4 Discussion

Here, we described the user-centered development, implementation and preliminary assessment of clinical dashboards in the Department of Anesthesia at Lille University Medical Center. Ten dashboards were developed and encompassed 39 indicators. The anesthesiologists who had used the dashboards for 2 months gave very positive feedback on the system, including good usability and high perceived usefulness.

One strength of the present study was the involvement of end users throughout the development process, with the objective of meeting their needs as closely as possible. Judging from the excellent SUS scores, this objective was met.

To the best of our knowledge, there are very few publications on the development of clinical dashboards for managing units and improving quality in anesthesiology [24,25,26,27]. With its clinical dashboards for mechanical ventilation and blood pressure control, the present research forms part of Lille University Medical Center’s current effort to improve intraoperative management. Specifically, with reference to current guidelines and the recent literature, the key indicators identified by end users included the incidence of hypotension [40], ventilatory settings and related monitored variables (particularly driving pressure) [41], fluid administration [42], and blood transfusion [43]. Interestingly, there is considerable room for improving compliance with current good-practice recommendations, since the latter are based on relatively recent studies. Figure 2 shows that the practice of using small tidal volumes and PEEP, the basis of protective lung ventilation [41], has consistently increased over the past few years in our hospital. The dashboards will thus allow each anesthesia team to monitor its own practice over time. Moreover, by providing data on postoperative complications, the dashboards will enable direct assessment of the impact of any practice change on postoperative outcomes. Finally, since the impact of practice changes probably depends on patient and/or surgical risk factors [44], our dashboards will allow different goals to be prioritized in different operating rooms [for example, the primary focus will be on ventilatory endpoints in abdominal or thoracic surgery (where the risk of respiratory complications is high) and on mean arterial pressure in neurosurgery (where low blood pressure can promote cerebral ischemia and increase intracranial pressure)].

Fig. 2
The ventilation management dashboard. Section 1 features the dashboard’s name, information about data availability, and a print button. Section 2 offers settings for selecting the medical unit and the time period displayed. Section 3 is a summary table. Section 4 is the main part of the dashboard, with all the charts.

It is important to note that the dashboard system is not embedded in the AIMS; if the center ever changed its AIMS, we would still be able to produce dashboards with data from the data warehouse. We followed a user-centered process when developing this tool, in order to guarantee usability and ensure that the information displayed did not lead to misunderstandings or interpretation errors. This approach is in line with current guidelines on developing health information technology and medical devices [45].

Although the present development process involved anesthesiologists, the tool could also be used by residents, nurses, surgeons, and healthcare managers. During the design process, the end users’ lack of knowledge about this type of technology limited the number of proposals; indeed, the clinical staff were not used to having access to aggregated data (other than for individual patient follow-up) and were not familiar with designing graphics themselves. After the dashboards had been released, the staff became more familiar with the tool and were thus more likely to suggest new themes that could be addressed with the help of dashboards.

The next step in this project will be to finalize the tool by addressing the problems encountered and reported by users (see Sect. 3.3). To this end, we will use a set of usability heuristics adapted to dashboard visualization [46] to evaluate and improve the dashboards’ usability. Furthermore, the dashboards’ acceptability to clinicians and unit heads will be assessed with a questionnaire based on the unified theory of acceptance and use of technology [47]. In subsequent work, we intend to assess the dashboards’ medium-term acceptability among anesthetists and other healthcare professionals. After a few months of use, a further round of development might be useful for adapting the existing dashboards and generating new ones that address novel themes and issues and are applicable to other medical specialties. The integration of new data sources could also be considered. On the clinical front, we will have to evaluate the impact of the dashboards on activity, practices, and endpoints such as patient outcomes [48]. In this sense, a dashboard could be considered to be an audit tool [49]; it will also be important to establish how dashboards might (i) help clinicians to become more aware of their practice and habits, (ii) change clinicians’ representations of their activities, and (iii) assist with decision-making. For example, the discovery and adoption of indicators will help clinicians to define targets according to the context of their unit and the recommendations of learned societies.