Standardized assessment, information, and networking technologies (SAINTs): lessons from three decades of development and testing



To rectify the significant mismatch observed between what matters to patients and what clinicians know, our research group developed a standardized assessment, information, and networking technology (SAINT).


Controlled trials and field tests involving more than 230,000 adults identified characteristics of a successful SAINT for primary care and community settings.


Evidence supports SAINT effectiveness when the SAINT has a simple design that provides a service to patients and explicitly engages them in an information and communication network with their clinicians. This service orientation requires that an effective SAINT deliver easily interpretable patient reports that immediately guide provider actions. For example, our SAINT tracks patient-reported confidence that they can self-manage health problems, and providers can immediately act on patients’ verbatim descriptions of what they want or need to become more health confident. This information also supports current and future resource planning, and thereby fulfills another characteristic of a successful SAINT: contributing to health care reliability. Lastly, SAINTs must manage or evade the “C-monsters,” powerful obstacles to implementation that largely revolve around control and commercialism. Responses from more than 10,000 adult patients with diabetes illustrate how a successful SAINT offers a standard and expedient guide to managing each patient’s concerns and adjusting health services to better meet the needs of any large patient population.


Technologies that evolve to include the characteristics described here will deliver more effective tools for patients, providers, payers, and policymakers and give patients control over sharing their data with those who need it in real time.

Background and methods

Health care providers have historically relied on patient statements to diagnose conditions and direct treatments. Since the advent of formal health care quality assessment in the 1960s [1], standardized patient-reported measures have become a tool for explicitly enumerating needs and documenting providers’ progress toward meeting those needs [2].

In the 1980s, our practice-based research network documented a significant mismatch between patients’ reports of their physical and emotional problems and what clinicians knew about those problems, if they knew anything at all [3, 4]. The implications of this mismatch for patient health and satisfaction with care provoked us to identify eight single-item measures of patient physical, emotional, and social function that could be used both to guide service and to monitor change [5]. The World Organization of Colleges, Academies and Academic Associations of General Practitioners/Family Physicians quickly adopted these measures, called the Dartmouth COOP Functional Assessment Charts, and translated them for worldwide use [6].

By the early 1990s, Rubenstein et al. had conducted a controlled trial to test whether a complex, multi-variable measure of patient function—the SF-36—could “be used by physicians in practice to help improve their patients' outcomes” [7, 8]. Concurrently, our research group conducted a controlled trial of the Dartmouth COOP Charts to assess the short-term effects of that approach on the process of care and patient satisfaction [9]. The SF-36 study found “no significant differences between experimental and control group patients at exit from the study on any functional status or health outcome measure” and concluded that a “more powerful intervention … is needed to help office-based internists improve patient outcomes” [8]. In the Dartmouth COOP Chart study, we found a small improvement in satisfaction with pain management, yet no significant impacts on patient or population health.

These early studies indicated the need both for explicit information that is useful to care providers and for service feedback loops between providers and patients. Over the next several decades, we therefore tackled the challenge of designing efficient feedback systems to enhance the impact of our assessment measures by alerting clinicians to patients’ self-reported needs, the necessary first step for helping them. As our assessment, information, and networking technology evolved, clinicians in our research group field-tested the various adaptations.

In a controlled study conducted in 1999, we compared responses from 832 elderly patients who merely received the self-assessment survey with responses from 819 intervention patients who received the survey in conjunction with automated need-specific instructions, and whose responses were automatically relayed to their physicians [10]. The patients in the intervention group felt their physicians were better informed of their needs and reported greater understanding of their health risks, as well as help with limitations in daily activities, emotional issues, and social support. Over the 2-year study period, eight of the 11 intervention practices improved their relative standing with regard to how their patients judged them. Only one of the 11 usual care practices showed this improvement.

Another controlled trial in 2006 tested web-based messaging between 47 physicians and 644 adult patients with pain and emotional problems [11]. The results of this study showed sustained improvement in patients’ pain and function at 6 months when our computerized system was combined with a problem-solving intervention supported by a nurse educator.

In summary, the Dartmouth COOP Charts’ simple measures of what matters became the starting point for a standardized assessment, information, and networking technology (SAINT). To date, more than 200,000 patients aged 19–69, 30,000 aged 70+, and 10,000 adolescents and children have used versions of our SAINT to guide clinician action on needs that matter and to improve patient health and satisfaction [12]. Thus, the following observations about the characteristics of an effective SAINT are based on decades of field tests and controlled trials and the responses of many users.

Characteristics of an effective SAINT

Easy to use: provides a service that is simple and cheap

To evaluate patient function, population health, or practice performance, and to allocate reimbursements to clinicians and health care systems, policymakers and payers have adopted many multi-item patient-reported instruments, such as the SF-36, the more recent PROMIS-29 and PROMIS-10, and many versions of CAHPS [13,14,15,16].

We designed our SAINT as a simple automated feedback system for the front lines of health care delivery, where patients and clinicians immediately co-produce a service. We emphasized single measures to improve efficiency, encourage participation, and stimulate action, and we showed that single queries of patients are both appropriate and more cost-effective as substitutes for several multi-item measures in evaluations of practice quality (compared to CAHPS) [17], domains of patient function (compared to the SF-36) [5], and patient engagement (compared to six measures for confident self-management contained in a Patient Activation Measure) [18].

Another consideration critical to the design of an effective SAINT is that clinicians operate on a lean business model and expect low direct and indirect costs for front-line users. SAINTs are commodities that must compete with hundreds of thousands of health care applications, and in the USA the measurement industry is increasingly considered a source of significant health care waste, such that high pricing is not likely to be tolerated [19,20,21,22]. Fortunately, as our SAINT evolved, the internet came to provide a very inexpensive alternative to our earlier distribution methods, which had relied on scannable paper bubble forms, bar codes, and touchscreen kiosks. The web-based SAINT can be offered to any interested health care provider at no cost, and the internet has also allowed schools and municipalities to disseminate our SAINT widely without charge [12, 23,24,25].

Thus, a successful SAINT must aim to serve, not just survey. Our SAINT was developed for people aged two and older and includes tools that support general problem-solving and decision-making, as well as special versions for homebound patients and those in the hospital [26, 27]. Variations of the SAINT have also been developed to comply with different types of regulatory requirements, including those of the Centers for Medicare & Medicaid Services and state Medicaid authorities.

Guides action: reduces clinician guesswork about what matters to patients

Measurement systems designed principally for retrospective population analyses are of little use to health care providers, who need prospective guidance for individual patients. Therefore, a service-oriented SAINT must enable timely, easily interpretable patient reporting that guides action.

As more and more people used our SAINT, health confidence emerged as one of the measures that mattered most to most patients [28], and exemplified a measure that could guide action for providers, patients, and community services. Health confidence is a single-item measure of overlapping concepts of self-management capacity, engagement, self-efficacy, and activation [29,30,31]. People who are designated as health-activated, or those who simply report confidence that they understand and can manage most of their health concerns, use fewer costly health care services [18, 31,32,33]. When a practice routinely measures and responds to health confidence, costly care use seems to decrease [34, 35], and many other patient-reported outcomes such as healthy eating and risk reduction are associated with health confidence [31].

Health confidence is also a good indicator of effective communication between patients and clinicians. For example, after adjusting for baseline characteristics, more than two thirds of patients who became more confident over time also reported that their clinicians were aware of and provided good education about emotional problems [36]. Another example showed a strong correlation between the health confidence of patients with asthma, diabetes, heart disease, high cholesterol, or hypertension and the extent to which clinicians allowed these patients time to ask questions, encouraged their involvement in decision-making, and explained care in language that was easy to understand [18].

Health confidence is undermined by pain and emotional problems [36]. With this knowledge, we investigated the possibility that a few measures, including health confidence, pain and emotional problems, and perceptions of adverse medication effects, might be more clinically useful than algorithm-based predictions generated from administrative data. We found that a set of only five measures, called the What Matters Index (WMI), could together forecast future costly care, immediately guide care for most patients, and serve as a suitable proxy for patient quality of life [37,38,39]. Table 1 lists the WMI questions and possible actions a medical assistant might take in response to patient answers to each measure.

Table 1 The What Matters Index and recommended actions based on responses
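The screening-and-response flow described above can be sketched as a minimal scoring routine. This is an illustrative assumption only: the item names, yes/no coding, and dictionary interface below are invented for demonstration and are not the published WMI instrument or its wording.

```python
# Illustrative sketch only: item names and yes/no coding are assumptions
# for demonstration, not the published WMI questions.
WMI_ITEMS = [
    "low_health_confidence",  # not confident to self-manage health problems
    "pain",                   # bothersome pain
    "emotional_problems",     # bothersome emotions
    "polypharmacy",           # taking many medications
    "medication_effects",     # perceived adverse medication effects
]

def wmi_score(responses):
    """Sum the five yes/no problem reports into a 0-5 index."""
    return sum(1 for item in WMI_ITEMS if responses.get(item, False))

def flagged_items(responses):
    """List each reported problem so staff can act on it immediately."""
    return [item for item in WMI_ITEMS if responses.get(item, False)]

patient = {"low_health_confidence": True, "pain": True}
print(wmi_score(patient))      # 2
print(flagged_items(patient))  # ['low_health_confidence', 'pain']
```

The design point is that every positive answer maps directly to an action, so the score is a by-product of service rather than a stand-alone measurement.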

Although our SAINT includes health confidence and the WMI in a more comprehensive platform that connects patients, providers, and community-based support services, a SAINT could also be effective using only health confidence or the WMI. A very brief SAINT could be offered on paper or a handheld device as a population-based screener to guide care and serve as a gateway for further inquiry. Figure 1 illustrates the logic and flow of WMI screening in the current version of our SAINT.

Fig. 1

The What Matters Index as an effective SAINT: an immediate guide for care to reduce risk for costly emergency or hospital use

Guides resources: contributes to health care reliability and resource planning

Absent adequate preparation, merely knowing what matters to each patient is not a guarantee that a small office practice or a larger health system will have exactly the resources most patients want and need exactly when and how the patients want and need them. Resource planning, as this preparatory activity is often called, has been the subject of decades of health services research. For example, Wagner identified several essential properties of successful health systems, founded on the understanding that effective chronic care management requires productive interactions between engaged patients and prepared, proactive providers [40,41,42].

Building on that work, our research group described how clinical microsystems can apply the model’s principles at the practice level. Our work showed that effective clinical microsystems must allocate resources based on the measurement and analysis of what matters to patients, always aiming to maximize the productivity of each patient interaction—no matter how brief—with planned, proactive care [43]. Thus, a WMI-based SAINT supports productive interactions that immediately serve patient needs. By standardizing the interaction, automating the information exchange, reducing clinician guesswork about what matters, and guiding subsequent clinical responses, this SAINT facilitates effective care and also signals where and how to focus resources. We have shown that this patient-centric approach, when extended to all patients, is associated with improved service quality [34, 35].

Specifically, before an office visit, our SAINT asks patients who are not health confident what they want or need to become more health confident, so that at the time of the office visit, staff will already know what resources are required to meet each patient’s needs. As more and more patients use this SAINT, their aggregated data indicate the sum of resources required to meet most patient needs most of the time. We have used a similar process to identify causes of unnecessary or harmful care [44], and those data guide resource planning away from unproductive or even counterproductive investments. The quality-of-care information produced by our SAINT is also increasingly accepted by those who pay for, certify, and regulate health care [45, 46].

This approach stands in stark contrast to current health resource planning in the USA, where fragmented and inconsistent health care, delivered at multiple points of service, erodes reliability for both the affluent and the poor [47]. While hospitals and practices are major contributors to this problem, service fragmentation extends to community resources as well. For example, a physician member of our research group tallied health care-related contacts for 386 older persons living in a community of 2600 inhabitants and discovered more than 30 stand-alone organizations representing generalist care, specialty care, nursing care, and social services. Without a tool that measures and analyzes patients’ self-reported resource needs, none of these organizations can predict what resources they should have available, let alone coordinate with each other to reduce waste and maximize efficiency.

Heeds the C-monsters: content, confidentiality, control, consent, culture, cost, copyright, coding, and commercialism

During the evolution of our SAINT, feedback from patients, providers, payers, and policymakers pointed to certain hazards that will limit a SAINT’s value, dissemination, and sustainability, and will ultimately result in failure to improve patient health and satisfaction.

First, the SAINT must get the goal right: The SAINT’s content should support service, not just measurement or reimbursement. Unless the patient and clinician see immediate benefit, the SAINT is likely to fail, regardless of its elegant appearance and psychometrics.

From the patient’s perspective, the SAINT must also support confidentiality, control, and consent. Our SAINT assures these with a privacy design that assumes patients expect absolute control over their identifying information and do not want to be subjected to advertising or conflicts of interest. The European Union recently codified these standards in the General Data Protection Regulation. In the USA, however, a SAINT may be accessible only through a hospital-sponsored portal that links the responses to a medical record, and people are often reluctant to give up personal identifiers just to complete a questionnaire of unknown content and purpose. For these reasons, our SAINT allows completion before collecting identifiers and offers patients a personal, portable health plan with no identifiers. To support patients’ cultural needs, we have found that translating the SAINT into another language and back to the original before dissemination can identify inappropriate interpretations and unsuitable cultural content.

From the clinician’s perspective, a SAINT must avoid the high costs associated with the hazards of copyright enforcement, proprietary coding, and commercialism. For these reasons, we have always made our SAINT freely available and adaptable for research and practice, requesting only that the copyright source be cited. Proprietary coding often obstructs customization and communication, and commercialization, exemplified by the many hundreds of competing, incompatible electronic medical records in the USA, obstructs co-production of care and resource planning among clinical settings.

Based on observations over decades of experience, Table 2 suggests characteristics that are likely to enhance a SAINT’s value, dissemination, and sustainability … and evade the “C-monsters.”

Table 2 Suggestions for enhancing SAINT value, dissemination, and sustainability

Principles into practice and policy: an illustration for patients with a chronic condition

This section shows how our SAINT leverages the simple measures of health confidence and the WMI to guide care toward what matters to patients and to improve health care reliability by standardizing service. Figure 2 illustrates the wide variation in health confidence levels across hospital service areas (HSAs) in the USA, for 73,338 adult patients with any chronic condition in 608 HSAs, at left, and 4446 adult patients with diabetes in 77 HSAs, at right. The health confidence data were collected through our SAINT and matched with geographic HSAs based on ZIP codes aggregated by the Dartmouth Atlas of Health Care [48]. Analyses based on the presence or absence of poverty, pain, and/or emotional problems shift the mean and median of the data in the figure but do not meaningfully lessen the population-level variation in health confidence across these HSAs.

Fig. 2

Percent of patients in US hospital service areas (HSAs) reporting they are very confident that they understand and can manage most of their health problems; data collected through our SAINT were matched with HSAs defined by the Dartmouth Atlas of Health Care [48]

Unreliable delivery of health care services has long been recognized as the cause for undesirable variation, for which quality management via standardization is a potent corrective [49]. To reduce variation, practitioners have used our SAINT to proactively ask every patient with low health confidence what would be most likely to help them gain confidence. As an example, Fig. 3 summarizes the relationship between what patients with diabetes said they needed to become more health confident, in relation to their WMI scores. In this example, respondents with higher WMI scores were more likely to identify a need for professional assistance, and less likely to believe that changes in their personal behavior would improve their health confidence. In addition to understanding what matters to each patient now, the practice can plan for future service demands knowing both the distribution of its patients’ WMI scores and the verbatim responses of many respondents.

Fig. 3

Patients with diabetes describe what they need to attain greater health confidence: based on verbatim responses, exemplified at right, of more than 600 patients collected through our SAINT since 2017, excluding “don’t know” or uninterpretable responses. The WMI (What Matters Index) is the sum of five patient-reported problems and concerns: (a) insufficient confidence to self-manage health problems, (b) pain, (c) bothersome emotions, (d) polypharmacy, and (e) adverse medication effects

Absent a standard report from each patient about what matters, both generalist and specialist clinicians confront the dilemma shown in Table 3. This table summarizes self-reports on a range of topics from more than 10,000 patients with diabetes, as well as these patients’ diagnoses, risks, and service use. The table sorts the patients by their WMI scores, and patients with higher WMI scores can clearly be seen to report far more extensive symptoms, limitations, and concerns, and less engagement in self-management. These patients are also much more likely to be burdened by co-occurring health conditions and to use potentially avoidable costly care.

Table 3 Self-reports from patients with diabetes illustrating how the What Matters Index guides care and is an expedient proxy for what else might matter

Thus, the conundrum facing any general or specialty care provider: During a brief face-to-face visit, often interrupted for documentation and billing activities, which of the many symptoms, illnesses, functional limitations, and aggravating lifestyle challenges should be the focus?

Historically, it was assumed that health care professionals could rely on clinical judgment to determine the most important focus and prescribe an appropriate treatment. However, clinicians can easily come to different conclusions based on how they interpret information and the contexts in which they work, including whether they provide specialty or generalist care, and in what setting. When deference to professional opinion is the predominant strategy, unreliable and ineffective care is often the result [50].

In recent years, payers and policymakers have promoted algorithm-based prediction instruments that use administrative and medical record data to identify very short lists of patients at risk for costly care, and have then incentivized clinicians to direct more time and services to these patients. A typical algorithm would categorize as high risk for costly care about 10% of the patients in Table 3 based on criteria of an additional diagnosis of serious atherosclerotic cardiovascular disease and a recent hospital admission or emergency department use.

However, predictive analytics and “hotspotting” strategies are proving to be inaccurate, cost-ineffective, and unethical because they direct resources away from the many patients not designated at-risk who are in fact destined to need costly care [38, 50]. The 10% of patients identified by those approaches will have a plethora of the issues listed in Table 3, and faced with this complexity, generalist and specialist clinicians justifiably fall back on their highly variable clinical judgments, often focusing on “sugar control” or other narrowly circumscribed clinical parameters. However, the selected focus is seldom, if ever, the only important challenge and may be subordinate to problems that impose greater burdens on these patients and the health system.

In contrast to the limitations of targeting a few outlier patients at risk for costly care based on old data, our SAINT’s WMI provides a timely, easy-to-interpret, actionable, and reliable foundation for predicting risk and organizing care, both within a practice and throughout a service area, with progress on any of the WMI measures likely to mitigate many associated problems. For example, when patients report that they are not health confident, the software asks them what they believe will be most helpful to improve it, and then sends the verbatim patient response to the clinician as part of a summary of the patient’s responses to every WMI item.

Consider two clinics that provide care only to patients with diabetes. Based on a sample of 30 patients in each clinic who complete the WMI, Clinic A learns that 70% of its patients have a WMI of 1 or less, whereas Clinic B learns that 70% have a WMI of 2 or more. From the information in Table 3, many patients in both clinics will require assistance to become more health confident. However, Clinic B will need to plan more resources to enhance its vigilance for adverse impacts of medications and to support the management of pain and emotional problems. The higher prevalence of patient poverty and social isolation presents an additional resource challenge for Clinic B.
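The two-clinic comparison amounts to a simple aggregation over sampled scores. The score lists below are made-up samples, chosen only so that the fractions reproduce the 70% splits described in the text; they are not real patient data.

```python
# Made-up samples of 30 WMI scores per clinic (not real patient data),
# constructed to mirror the 70% splits in the two-clinic example.
def share_at_or_above(scores, threshold=2):
    """Fraction of sampled patients with a WMI at or above the threshold."""
    return sum(1 for s in scores if s >= threshold) / len(scores)

clinic_a = [0] * 14 + [1] * 7 + [2] * 6 + [3] * 3            # 70% score 0-1
clinic_b = [0] * 4 + [1] * 5 + [2] * 12 + [3] * 6 + [4] * 3  # 70% score 2+

print(share_at_or_above(clinic_a))  # 0.3
print(share_at_or_above(clinic_b))  # 0.7
```

Aggregating in this way lets a practice translate individual screening answers directly into an estimate of the staffing and support resources its population will demand.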

In summary, this illustration calls to mind a useful analogy: that the diagnostic labels we give each patient are merely suitcases containing a jumble of symptoms, associated illnesses, aggravating lifestyle challenges, health-related concerns, functional limitations, and social factors. The WMI provides a standard and expedient handle for a generalist or specialist to move each patient’s suitcase toward the patient’s desired destination. Different diagnostic suitcases can use the same WMI handle.


This report summarizes lessons from three decades of using a standardized assessment, information, and networking technology (SAINT). For patients with chronic conditions, the evidence supports SAINT effectiveness at improving patient health and satisfaction when the technology immediately serves patients and engages them and their clinicians in the co-production of better care.

We have emphasized the advantages of a What Matters Index (WMI) as a parsimonious starting point for almost any SAINT. The WMI has no direct cost and is unambiguous, highly accessible, and strongly correlated with patient-reported quality of life. The WMI has also proved reliable in predicting future costly care for poor and not-poor patients with and without chronic conditions [38], and the reduced variance in interpretation facilitates resource planning and thereby maximizes value and reliability. Thus, generalist and specialist clinicians who use a SAINT that contains the WMI are likely to avoid common obstacles to the co-production of high-quality health care [51].

This report’s focus on the WMI raises a legitimate concern about the inclusion or exclusion of other patient-reported measures or indices derived from a combination of measures. Because of the heterogeneity in patients’ needs, resource availability, and health workers’ responses, a SAINT is unlikely to have the same beneficial impact in all situations. Therefore, an effective SAINT must have a highly adaptable design to add or omit measures when they are needed for specific subgroups of patients or research protocols. This report has described how our SAINT was designed for adaptability, and available evidence suggests that it is likely to be cost-effective for improving health care services and patient outcomes [10]. Our hypothesis is that the SAINT methodology and WMI described herein should be considered standards for comparison to other measures and methods.

In summary, with low and decreasing response rates to traditional survey techniques [52, 53], new tools and business models are needed to assess and deliver what matters to patients. Technologies that evolve to include the characteristics described here will deliver more effective and efficient tools for patients, providers, payers, and policymakers and give patients control over sharing their data with those who need it in real time. The WMI-based SAINT provides one broadly applicable and inexpensive strategy that reduces clinician guesswork regarding what matters to patients and facilitates resource planning to improve health care reliability. A medical maxim entreats us: “Listen to the patient; she is telling you the diagnosis.” Here, we add: Listen to a few measures that really matter to most patients; those measures are telling you what to do.

Data availability

Data collected through our SAINT were used to illustrate how the principles described in this report can be applied to clinical practice and policy by documenting: (a) variation across hospital service areas (Fig. 2, n = 73,338), (b) the many problems reported by patients with diabetes (Table 3, n = 10,220), and (c) suggestions from patients with diabetes for increasing their health confidence (Fig. 3, n = 603). The data are available from the author upon email request. No personal patient identifiers are collected or stored by the SAINT.


  1. 1.

    Donabedian, A. (1966). Evaluating the quality of medical care. The Milbank Quarterly,44(3, supplement), 166–206.

    Article  Google Scholar 

  2. 2.

    Ellwood, P. M. (1985). Outcome management: A technology of patient experience. New England Journal of Medicine,318, 1551–1556.

    Google Scholar 

  3. 3.

    Nelson, E., Kirk, J., Bise, B., Chapman, R., Hale, F. A., Stamps, P., et al. (1981). The cooperative information project part 1: A sentinel practice network for service and research in primary care. Journal of Family Practice,13(5), 641–649.

    CAS  PubMed  Google Scholar 

  4. 4.

    Nelson, E. C., Conger, B., Douglass, R., Gephart, D., Kirk, J., Page, R., et al. (1983). Functional health status levels of primary care patients. Journal of the American Medical Association,249(24), 3331–3338.

    CAS  Article  Google Scholar 

  5. 5.

    Nelson, E. C., Landgraf, J. M., Hays, R. D., Wasson, J. H., & Kirk, J. W. (1990). The functional status of patients: How can it be measured in physicians’ offices? Medical Care,28(12), 1111–1126.

    CAS  Article  Google Scholar 

  6. 6.

    WONCA Classification Committee. (1990). Functional status measurement in primary care. In M. Lipkin (Ed.), Frontiers of primary care. New York: Springer.

    Google Scholar 

  7. 7.

    Ware, J. E., & Sherbourne, C. D. (1992). The MOS 36-item short-form health survey (SF-36) I: Conceptual framework and item selection. Medical Care,30, 473–478.

    Article  Google Scholar 

  8. 8.

    Rubenstein, L. V., Calkins, D. R., Young, R. T., Cleary, P. D., Fink, A., Kosecoff, J., et al. (1989). Improving patient function: A randomized trial of functional disability screening. Annals of Internal Medicine,111, 836–842.

    CAS  Article  Google Scholar 

  9. 9.

    Wasson, J. H., Hays, R., Rubenstein, L., Nelson, G., Leaning, D., Johnson, A., et al. (1992). The short-term effect of patient health status assessment in a health maintenance organization. Quality of Life Research,1(2), 99–106.

    CAS  Article  Google Scholar 

  10. 10.

    Wasson, J. H., Stukel, T. A., Weiss, J. E., Hays, R. D., Jette, A. M., & Nelson, E. C. (1999). A randomized trial of using patient self-assessment data to improve community practices. Effective Clinical Practice,2, 1–10.

    CAS  PubMed  Google Scholar 

  11. 11.

    Ahles, T. A., Wasson, J. H., Seville, J. L., Johnson, D. J., Cole, B. F., Hanscom, B., et al. (2006). A controlled trial of methods for managing pain in primary care patients with or without co-occurring psychosocial problems. The Annals of Family Medicine,4(3), 341–350.

    Article  Google Scholar 

  12. 12.

    Nelson, E. C., Eftimovska, E., Lind, C., Hager, A., Wasson, J. H., & Lindblad, S. (2015). Patient reported outcome measures in practice. British Medical Journal.

    Article  PubMed  Google Scholar 

  13. 13.

    Tarlov, A. R., Ware, J. E., Jr., Greenfield, S., Nelson, E. C., Perrin, E., & Zubkoff, M. (1989). The medical outcomes study: An application of methods for monitoring the results of medical care. Journal of the American Medical Association,262(7), 925–930.

    CAS  Article  Google Scholar 

  14. 14.

    Ware, J. E., Jr., Kosinski, M., & Keller, S. D. (1996). A 12-item short-form health survey: Construction of scales and preliminary tests of reliability and validity. Medical Care,34(3), 220–233.

    Article  Google Scholar 

 15. Cella, D., Riley, W., Stone, A., Rothrock, N., Reeve, B., Yount, S., et al. (2010). Initial adult health item banks and first wave testing of the patient-reported outcomes measurement information system (PROMIS™) network: 2005–2008. Journal of Clinical Epidemiology, 63(11), 1179–1194.

 16. Agency for Healthcare Research and Quality (2019). CAHPS®: Surveys and tools to advance patient-centered care. Retrieved October 1, 2019, from

 17. Ho, L., Swartz, A., & Wasson, J. H. (2013). The right tool for the right job: The value of alternative patient experience measures. Journal of Ambulatory Care Management, 36(3), 241–244.

 18. Wasson, J. H. (2013). A patient-reported spectrum of adverse health care experiences: Harms, unnecessary care, medication illness, and low health confidence. Journal of Ambulatory Care Management, 36(3), 245–250.

 19. Blumenthal, D., Malphrus, E., & McGinnis, J. M. (Eds.). (2015). Vital signs: Core metrics for health and health care progress. Washington, DC: Committee on Core Metrics for Better Health at Lower Cost, Institute of Medicine.

 20. Sinsky, C., Colligan, L., Prgomet, M., Reynolds, S., Goeders, L., Westbrook, J., et al. (2016). Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Annals of Internal Medicine, 165(11), 753–760.

 21. Wasson, J. H. (2017). A troubled asset relief program for the patient-centered medical home. Journal of Ambulatory Care Management, 40(2), 89–100.

 22. Wasson, J. H. (2019). Insights from organized crime for disorganized health care. Journal of Ambulatory Care Management, 42, 138–146.

 23. Bracken, A. C., Hersh, A. L., & Johnson, D. J. (1998). A computerized school-based health assessment with rapid feedback to improve adolescent health. Clinical Pediatrics, 37, 677–683.

 24. Wasson, J. H., & James, C. (2001). Implementation of a web-based interaction technology to improve the quality of a city's health care. Journal of Ambulatory Care Management, 24, 1–12.

 25. Luce, P., Phillips, J., Benjamin, R., & Wasson, J. H. (2004). Technology for community health alliances. Journal of Ambulatory Care Management, 27(4), 366–374.

 26. McGregor, M. J., Slater, J., Sloan, J., McGrail, K. M., Martin-Matthews, A., Berg, S., et al. (2017). How's your health at home: Frail homebound patients' reported health experience and outcomes. Canadian Journal on Aging.

 27. Lepore, M., Wild, D., Gil, H., Lattimer, C., Harrison, J., Woddor, N., et al. (2013). Two useful tools to improve patient engagement and transition from the hospital. Journal of Ambulatory Care Management, 36(4), 338–344.

 28. Wasson, J. H., & Bartels, S. (2009). CARE vital signs supports patient-centered collaborative care. Journal of Ambulatory Care Management, 32, 56–71.

 29. Hibbard, J. H., Stockard, J., Mahoney, E. R., & Tusler, M. (2004). Development of the patient activation measure (PAM): Conceptualizing and measuring activation in patients and consumers. Health Services Research, 39, 1005–1026.

 30. Wasson, J. H., & Coleman, E. A. (2014). Health confidence: A simple, essential measure for patient engagement and better practice. Family Practice Management, 21(5), 8–12.

 31. Wasson, J. H., Benjamin, R., Johnson, D., Moore, L. G., & Mackenzie, T. (2011). Patients use the internet to enter the medical home. Journal of Ambulatory Care Management, 34, 38–46.

 32. Hibbard, J. H., Greene, J., Sacks, R., Overton, V., & Parrotta, C. D. (2016). Adding a measure of patient self-management capability to risk assessment can improve prediction of high costs. Health Affairs, 35(3), 489–494.

 33. Mattingly, J. T., & Nong, K. (2019). Implementing a health confidence tool at time of discharge. The Patient - Patient-Centered Outcomes Research.

 34. Ho, L., Haresch, J. W., Nunlist, M. M., Schwarz, A., & Wasson, J. H. (2013). Improvement of patients' health confidence: A comparison of 15 primary care practices and a national sample. Journal of Ambulatory Care Management, 36(3), 235–240.

 35. Nunlist, M. M., Blumberg, J., Uiterwyk, S., & Apgar, T. (2016). Using health confidence to improve patient outcomes. Family Practice Management, 23(6), 21–24.

 36. Wasson, J. H., Johnson, D. J., & Mackenzie, T. (2008). The impact of primary care patients' pain and emotional problems on their confidence with self-management. Journal of Ambulatory Care Management, 31, 120–127.

 37. Wasson, J. H., Soloway, L., Moore, L. G., Labrec, P., & Ho, L. (2017). Development of a care guidance index based on what matters to patients. Quality of Life Research.

 38. Wasson, J. H., Ho, L., Soloway, L., & Moore, L. G. (2018). Validation of the What Matters Index: A brief, patient-reported index that guides care for chronic conditions and can substitute for computer-generated risk models. Public Library of Science ONE, 13(2), e0192475.

 39. Wasson, J. H. (2019). A brief review of single-item and multi-item quality of life measures for Medicare patients. Journal of Ambulatory Care Management, 42, 21–26.

 40. Improving Chronic Illness Care (2019). The chronic care model. Retrieved October 1, 2019, from

 41. Bodenheimer, T., Wagner, E. H., & Grumbach, K. (2002). Improving primary care for patients with chronic illness. Journal of the American Medical Association, 288(14), 1775–1779.

 42. Bodenheimer, T., Wagner, E. H., & Grumbach, K. (2002). Improving primary care for patients with chronic illness: The chronic care model, part 2. Journal of the American Medical Association, 288(15), 1909–1914.

 43. Wasson, J. H., Godfrey, M. M., Nelson, E. C., Mohr, J. J., & Batalden, P. B. (2003). Microsystems in health care: Part 4. Planning patient-centered care. Joint Commission Journal on Quality and Safety, 29(5), 227–237.

 44. Wasson, J. H., Mackenzie, T. A., & Hall, M. (2007). Patients use an internet technology to report when things go wrong. Quality and Safety in Health Care, 16, 213–217.

 45. Ho, L., & Antonucci, J. (2015). The dissenter's viewpoint: There has to be a better way to measure a medical home. Annals of Family Medicine.

 46. Ho, L., & Antonucci, J. (2017). Using patient-entered data to supercharge self-management. Annals of Family Medicine.

 47. Wasson, J. H. (2008). Who is in charge? Even affluent patients suffer consequences of fragmented care. Journal of Ambulatory Care Management, 31, 35–36.

 48. The Dartmouth Atlas Project (2020). Resource document. Retrieved March 25, 2020, from

 49. Berwick, D. M. (1991). Controlling variation in health care. Medical Care, 29, 1212–1225.

 50. Finkelstein, A., Zhou, A., Taubman, S., & Doyle, J. (2020). Health care hotspotting: A randomized, controlled trial. New England Journal of Medicine.

 51. Hibbard, J., & Lorig, K. (2012). The dos and don'ts of patient engagement in busy office practices. Journal of Ambulatory Care Management, 35(2), 129–132.

 52. Roberts, B. W., Yao, J., Trzeciak, C. J., Bezich, L. S., Mazzarelli, A., & Trzeciak, S. (2020). Income disparities and nonresponse bias in surveys of patient experience. Journal of General Internal Medicine.

 53. Salzburg, C. A., Kahn, C. N., Foster, N. E., Demehin, A. A., Guinan, M. A., Ramsey, P., et al. (2019). Modernizing the HCAHPS survey. Retrieved March 24, 2020, from

Acknowledgements


Past and present members of the Dartmouth, Northern New England Practice-Based Research Network and Ideal Microsystem Practices, Eugene Nelson, Mark Nunlist, Lynn Ho, Jean Antonucci, L. Gordon Moore, Dale Gephart, Deborah Johnson, Kimberly Nunlist, Lisa Wasson, and Lloyd Kvam.

Author information



Corresponding author

Correspondence to John H. Wasson.

Ethics declarations

Conflict of interest

The author declares no conflicts of interest. Under license with the Trustees of Dartmouth College, the author develops and freely distributes and related websites for research and clinical practice. When academic or clinical users modify the material, the Trustees of Dartmouth College request that the source be attributed. Commercial users are required to obtain a license.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Wasson, J.H. Standardized assessment, information, and networking technologies (SAINTs): lessons from three decades of development and testing. Qual Life Res (2020).



Keywords

  • Patient engagement
  • Risk assessment
  • Health confidence
  • What Matters Index
  • Guided healthcare