Community Mental Health Journal, Volume 46, Issue 4, pp 319–329

Integrating Assertive Community Treatment and Illness Management and Recovery for Consumers with Severe Mental Illness

  • Michelle P. Salyers
  • Alan B. McGuire
  • Angela L. Rollins
  • Gary R. Bond
  • Kim T. Mueser
  • Veronica R. Macy
Original Paper


This study examined the integration of two evidence-based practices for adults with severe mental illness: Assertive community treatment (ACT) and illness management and recovery (IMR) with peer specialists as IMR practitioners. Two of four ACT teams were randomly assigned to implement IMR. Over 2 years, the ACT–IMR teams achieved moderate fidelity to the IMR model, but low penetration rates: 47 (25.7%) consumers participated in any IMR sessions and 7 (3.8%) completed the program during the study period. Overall, there were no differences in consumer outcomes at the ACT team level; however, consumers exposed to IMR showed reduced hospital use over time.


Keywords: Illness management and recovery · Assertive community treatment · Fidelity · Mental illness · Evidence-based practice

In light of national policies encouraging the implementation of evidence-based practices that focus on recovery from mental illness (President’s New Freedom Commission on Mental Health 2003), we sought to integrate two evidence-based practices, assertive community treatment (ACT) and illness management and recovery (IMR). One of the most rigorously supported evidence-based approaches for adults with severe mental illness (SMI) is assertive community treatment (ACT; Bond et al. 2001; Phillips et al. 2001). ACT addresses the fragmentation of the mental health system by delivering a comprehensive range of services from a single team. ACT is also designed to address the intensive treatment needs of consumers with SMI who have not engaged in traditional, office-based treatment, such as consumers who are repeatedly hospitalized. Over 40 years of research indicates ACT is effective at reducing hospitalization rates, stabilizing housing, and facilitating treatment retention, while having less consistent effects on employment, criminal justice involvement, and quality of life outcomes (Bond et al. 2001).

Although ACT enjoys a strong track record for improving some important outcomes, critics have voiced concerns that the ACT model is coercive and paternalistic (Ahern and Fisher 2001; Gomory 2001). The few studies that have systematically examined consumer complaints about ACT suggest that these criticisms may be overstated (McGrew et al. 2002). However, a common perception within the mental health field is that the ACT model—as currently practiced—has often failed to live up to the promise of engendering empowerment and self-determination of consumers. Thus, even though the basic philosophy of ACT is consistent with consumer empowerment and recovery (i.e., keeping consumers in the community and out of the hospital), ACT may not always be practiced in a way that embraces recovery (Salyers and Tsemberis 2007).

One potential approach for addressing the needs for a recovery-oriented service system is to enhance ACT services by implementing IMR, a structured program for helping consumers learn effective ways to manage their illnesses and pursue their recovery goals (Gingerich and Mueser 2005). IMR embraces the principle of self-determination and is based on the value that services should be consumer-directed and provide the means necessary for consumers to make informed choices. The IMR model posits that by empowering people with self-management tools, new horizons are opened and mental health services become a true partnership between consumers and providers. IMR was created as a part of the National Implementing Evidence-Based Practices Project (Drake et al. 2001) by developing a package based on a systematic review of research on effective strategies for teaching illness self-management to people with SMI (e.g., psychoeducation, optimizing the use of medications, coping skills training, relapse prevention techniques) (Mueser et al. 2002). Recent evaluations of the IMR program have shown promising results (Mueser et al. 2006; Salyers et al. 2009a, b), including a randomized trial which found superior illness self-management outcomes for IMR participants compared to participants in treatment as usual (Hasson-Ohayon et al. 2007).

The purpose of the current study was to enhance ACT services by implementing IMR. We believed IMR would be best integrated onto ACT teams by the work of a peer specialist (i.e., a consumer of mental health services who was doing well in their own recovery). Although randomized controlled trials have not shown that services given by consumer providers are significantly more effective than services given by non-consumer providers (Rivera et al. 2007; Solomon 2004), their inclusion on treatment teams is recommended (President’s New Freedom Commission on Mental Health 2003) and many state mental health authorities encourage the hiring of consumers to work in mental health treatment programs (Mowbray et al. 1997). However, the hiring of consumers has often been based more on “tokenism” than genuine collaboration, with consumer providers not truly integrated as staff members and treated as equals on treatment teams (Basto et al. 2000). Because one of the barriers to full integration has been the absence of well-defined roles for consumers (Solomon 2004), we believed that teaching IMR could provide a valuable role. Engaging consumers in the process of identifying personal recovery goals and teaching illness self-management skills in the IMR program could provide an important structural role for peer providers; trained peers successful in managing their own recovery could be ideal providers of IMR services. Further, findings from a small pilot study suggested that a peer providing IMR on an ACT team was seen by staff and consumers as a role model and inspiration (Salyers et al. 2009b).

For this study, we compared two ACT teams with IMR (ACT–IMR teams) to two ACT teams without IMR (ACT-only). As an implementation check, we expected that ACT–IMR teams would show greater fidelity to IMR principles at 1- and 2-year follow-up than ACT-only teams, but that the teams would not differ in ACT fidelity. We also expected that ACT–IMR teams would have greater rates of consumers receiving IMR. We hypothesized that ACT–IMR teams would have better consumer outcomes than ACT-only teams. Within ACT–IMR teams, we hypothesized that consumers who received IMR would have improved outcomes as well.


Overview of Research Design

The sample included four high-fidelity ACT teams, two of which were randomly assigned to the ACT–IMR condition and two to the ACT-only control condition. ACT–IMR teams were augmented with a part-time peer specialist, paired with 1–2 other clinicians to provide IMR. The IMR specialists on these teams also received intensive training and monitoring from IMR consultant/trainers. Trainers included a doctoral-level psychologist and a consumer provider, both trained by the IMR toolkit authors (Mueser and Gingerich 2002). Contacts included a 2-day intensive skills training, annual 1-day refresher trainings, monthly conference calls, semi-annual fidelity assessments with written recommendations for program improvement, and additional phone calls or site visits to address implementation and training issues as needed. The consumer trainer also held additional conference calls and face-to-face meetings each quarter specifically to support peer specialists in their roles. ACT-only teams were exposed to IMR principles at the beginning of the project, but did not have designated peer specialists and did not participate in any of the IMR training during the project.

Team level fidelity to IMR was assessed every 6 months throughout the study. Objective indicators of community integration (e.g., housing status, hospitalization) were collected quarterly throughout the project. Ratings of illness self-management, hope, and satisfaction with services were assessed at baseline (prior to IMR training), and 1 and 2 years later.

Study Sites

The study was conducted at four community mental health centers (CMHCs) in Indiana chosen in a competitive Request for Proposal process from among 11 state-certified ACT teams at that time. We hosted an informational workshop on IMR on February 25, 2004 with representatives from the four ACT teams attending. Randomization took place the following day.

Program Models

Assertive Community Treatment

Assertive community treatment programs were required to follow a set of standards set forth by the Indiana Division of Mental Health and Addiction (DMHA) to maintain certification and funding as an ACT team. Indiana ACT standards mandate that teams must consist of a master’s level team leader and other particular specialists (e.g., at least one registered nurse, a vocational specialist), and teams must provide particular services (e.g., symptom management, crisis assessment and intervention). All consumers continued to receive ACT services; the two conditions differed in the extent to which they received additional IMR services.

Illness Management and Recovery

Illness management and recovery is a curriculum-based approach to helping consumers set and achieve personal recovery goals and acquire the knowledge and skills to manage their illnesses independently (Gingerich and Mueser 2005). The IMR program incorporates five main types of evidence-based techniques for improving illness self-management: psychoeducation, cognitive-behavioral approaches to medication adherence, relapse prevention, social skills training (e.g., to enhance social support), and coping skills training (e.g., for persistent symptoms). Treatment sessions are provided individually or in small groups and typically last about an hour. These sessions are usually held weekly over a ten-month period and cover ten modules, each taught sequentially. Each module includes an educational handout that summarizes the main points of the topic and includes checklists and worksheets to enhance learning. Material is taught using a combination of educational techniques, motivational interventions, and cognitive-behavioral techniques to help consumers manage their illness.

Integration of ACT and IMR

We speculated that IMR would be best integrated onto an existing ACT team by designating specialists on the team, in a fashion similar to other specialty roles on ACT teams (Allness and Knoedler 2003). IMR specialists would function like other specialists on the ACT team (i.e., substance abuse specialist, vocational specialist), working directly with interested consumers to learn and apply the IMR materials and teaching the other ACT team members about the model so that the entire team could support the intervention. The grant funded a half-time peer specialist position for each of the two ACT–IMR teams. As part of the agreement to participate in the study, each ACT–IMR team agreed to identify at least one clinician to receive the IMR training and to serve alongside the peer specialist as an IMR practitioner.

Center A hired a part-time peer specialist in August 2004 and had one other master’s level clinician trained in IMR. However, the clinician lost interest in being an IMR specialist so the team leader, also a master’s level clinician, became trained and started providing IMR. In June 2005, the original peer specialist left her position. A second part-time peer specialist was hired in August 2005 and continued to work as an ACT team member and IMR specialist past the end of the study. Center B hired one part-time peer specialist and trained two additional clinicians (one PhD level psychologist and one master’s level therapist) to be the IMR specialists for the team. The original peer specialist was hired in July 2004 and continued to be a member of the team past the end of the study. Near the end of the study, the master’s level clinician transferred to a different program within the agency and no longer provided IMR services.

Sampling Considerations

The sample included all consumers in the four ACT programs (regardless of whether they received IMR in the case of the ACT–IMR programs). ACT teams are designed to serve consumers with SMI who have the most severe disabilities. The state defines those with SMI as adults who have a primary mental illness for at least 12 months that results in significant functional impairment. Further, ACT certification requires that 80% of ACT consumers must have a DSM-IV diagnosis on Axis I of 295–296 (schizophrenia, bipolar disorder, and other major mood disorders). At the time of this study, the state had no uniform rules for admission criteria across ACT sites. However, most ACT programs based their admission criteria on at least one of the following areas: extended or frequent hospitalizations or use of emergency services, persistent symptoms, co-occurring substance use, criminal justice involvement, homelessness, or lack of benefit from traditional office-based service.


Program Fidelity

Indiana ACT teams are regularly assessed for fidelity to the ACT model using the Dartmouth ACT Scale (DACTS; Teague et al. 1998). Each of 28 items is rated on a five-point behaviorally anchored scale, ranging from 1 = not implemented to 5 = fully implemented. The total score is the mean of all 28 items and therefore also ranges from 1 to 5. DACTS ratings were completed by ACT consultant/trainers during their regularly scheduled visits to the programs based on observation of the team meeting, interviews with staff and consumers, and chart reviews. The DACTS was tested in 50 programs and discriminated significantly between each of the four types of case management (Teague et al. 1998). The scale has been found to be sensitive to change over time in implementation efforts (McHugo et al. 2007). A precursor to the DACTS predicted consumer outcomes (McHugo et al. 1999), and the DACTS itself has been associated with lower hospitalization rates among teams (Bond and Salyers 2004). Inter-rater reliability of the DACTS was found to be .99 in the National Implementing Evidence-based Practices Project (McHugo et al. 2007). Each of the four programs had a DACTS completed annually throughout the project.

The primary tool to assess the degree of implementation of IMR was the IMR Fidelity Scale (Mueser et al. 2002). Each of 13 items is rated on a five-point behaviorally anchored scale, ranging from 1 = not implemented to 5 = fully implemented. As with the DACTS, the total score is the mean of the individual items, ranging from 1 to 5. Fidelity assessments were made by two trained raters during a day-long visit with each program. For the ACT–IMR programs, an IMR trainer was paired with a research assistant. During an IMR fidelity visit, the raters reviewed charts and conducted brief interviews with staff and consumers. At the end of the day, the raters independently rated the program and compared ratings. Discrepancies were resolved through discussion, and additional data were gathered if needed. IMR fidelity was assessed on all four teams every 6 months. The IMR Fidelity Scale has had less psychometric validation than the DACTS. However, research has shown the scale to have high inter-rater reliability (intraclass correlation coefficient = .97; McHugo et al. 2007), sensitivity to change over time during implementation efforts (McHugo et al. 2007; Salyers et al. 2009a), and associations with consumer outcomes (Hasson-Ohayon et al. 2007).

Objective Indicators of Community Integration

Objective indicators were assessed with the Consumer Outcomes Monitoring Package (COMP; Press et al. 2003). This software is currently used by Indiana ACT teams to gather data on hospitalizations, current living arrangements, substance use stage of treatment, incarcerations, and competitive employment. ACT programs enter data monthly and submit the data quarterly as part of ongoing program monitoring.

Self-Report Measures

Illness self-management was assessed through the IMR Scales (Salyers et al. 2007). Parallel forms of the scale are completed by the consumer and staff. The IMR Scales both have 15 items rated on a five-point behaviorally anchored scale and include items such as progress toward goals, knowledge about mental illness, symptom distress, and coping. The mean of all the items is used as the total score (ranging from 1 to 5). IMR Scales have shown adequate internal consistency (Cronbach’s alpha ≥.71), strong test–retest correlations over a 2 week period (both versions, r = .81, P < .001), and were correlated with other indices of functioning, symptoms, and recovery (Salyers et al. 2007).

Hope was assessed using the six-item Adult State Hope Scale (Snyder et al. 1996). The items are rated on a scale from “Definitely False” to “Definitely True.” The mean of all the items is used, with a total score ranging from 1 to 5. A series of studies demonstrated internal consistency (median Cronbach’s alpha = .93), high levels of convergent and discriminant validity, and sensitivity (Snyder et al. 1996). Although the Adult State Hope scale was psychometrically tested in college students and community samples, the scale has also been shown to be appropriate for use in individuals with SMI (Dickerson 2002; McGrew et al. 2005).

The satisfaction with services (SWS) scale is an 11-item consumer satisfaction checklist adapted from the Client Satisfaction Questionnaire (Larsen et al. 1979). The total score is the mean of all items and ranges from 1 to 3. The SWS was designed specifically for use with ACT consumers, has been used in several large-scale ACT studies, and has demonstrated internal consistency (Cronbach’s alpha = .90) (Bond and DeGraaf-Kaser 1990; Bond et al. 1991; McGrew et al. 1995).


IMR fidelity visits were conducted every 6 months. At the baseline, 12-month, and 24-month visits, research staff also brought surveys with self-addressed, stamped envelopes to be returned directly to the research team. Clinician-rated IMR Scales for each ACT consumer were completed soon after the visit and returned to research staff. A $5 bill was attached to each consumer survey to incentivize survey completion; consumer surveys contained the client-rated IMR Scale, Hope, and Satisfaction measures. ACT staff members invited consumers to participate, explained consent, and distributed the packets of questionnaires, but were not present when the consumer filled out the survey. The study and its procedures were approved by the Institutional Review Board at Indiana University Purdue University Indianapolis. The lead author and one co-author are co-owners of Target Solutions, LLC, which provides consultation and training in illness management and recovery. The other authors report no competing interests.

Data Analysis

We examined fidelity scores descriptively, reporting mean scores for each of the programs over time. We examined differences in demographics by site and by condition, using analysis of variance (ANOVA) for continuous variables and chi-square tests for categorical variables. To test the hypothesis that ACT–IMR programs would have better client outcomes over time, we conducted repeated measures ANOVAs for each of the dependent variables. Finally, within the two ACT–IMR teams, we looked at changes in outcomes for consumers who received at least some IMR to see if their post-IMR scores were significantly improved. We conducted a paired samples t-test for each of the continuous dependent variables and McNemar’s test for changes in dichotomous variables (e.g., employed versus not employed). Because days hospitalized were skewed, a logarithmic transformation was performed for analyses using this variable.
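The pre–post analyses described above can be sketched as follows. This is an illustrative outline only, not the study's analysis code, and the data in the usage example are invented rather than taken from the study; a log(x + 1) transform is assumed for the skewed hospital-days variable because plain log is undefined at zero.

```python
import numpy as np
from scipy import stats


def paired_log_ttest(pre_days, post_days):
    """Paired t-test on log-transformed counts of days hospitalized.

    Uses log(x + 1) to accommodate the zeros typical of skewed
    count data (an assumption; the paper does not state the constant).
    """
    log_pre = np.log1p(np.asarray(pre_days, dtype=float))
    log_post = np.log1p(np.asarray(post_days, dtype=float))
    return stats.ttest_rel(log_pre, log_post)


def mcnemar_exact(pre_yes, post_yes):
    """Exact McNemar test for a paired dichotomous outcome.

    Only the discordant pairs (yes->no and no->yes) carry
    information; under the null they are equally likely, so an
    exact two-sided binomial test on them gives the p-value.
    """
    pre = np.asarray(pre_yes, dtype=bool)
    post = np.asarray(post_yes, dtype=bool)
    b = int(np.sum(pre & ~post))   # changed yes -> no
    c = int(np.sum(~pre & post))   # changed no -> yes
    if b + c == 0:
        return 1.0                 # no discordant pairs: no evidence of change
    return stats.binomtest(b, b + c, 0.5).pvalue


# Hypothetical pre/post data for illustration (not the study's values)
pre = [12, 0, 30, 5, 0, 8, 2, 0]
post = [0, 0, 10, 0, 0, 3, 0, 0]
result = paired_log_ttest(pre, post)
p_mcnemar = mcnemar_exact([d > 0 for d in pre], [d > 0 for d in post])
```

With 40 paired observations, as in Table 4, `ttest_rel` would yield the t(39) statistics reported there; the dichotomous outcomes (hospitalized, incarcerated, homeless, employed) would go through `mcnemar_exact`.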


Sample Description

Because we had data from several sources (computerized outcomes submitted quarterly and annual consumer and staff surveys) and because teams admitted and discharged consumers during the study, there was variability in the number of consumers available at each time period. Overall, we received some data for 324 consumers: Center A N = 55; Center B N = 128; Center C N = 71; Center D N = 70, or 183 for the ACT–IMR condition and 141 for the Control condition. Background characteristics for consumers by site and for experimental versus control conditions are shown in Table 1. The two conditions differed significantly on race and education, with the ACT–IMR teams having a greater proportion of consumers who were not Caucasian and a greater proportion of consumers with a higher level of educational attainment.
Table 1

Background characteristics, M (SD)/n (%)

| Characteristic | Center A (n = 55) | Center B (n = 128) | ACT–IMR total (n = 183) | Center C (n = 71) | Center D (n = 70) | Control total (n = 141) | Test statistica |
| Age | 40.8 (9.8) | 41.9 (12.2) | 41.6 (11.5) | 42.1 (12.3) | 44.1 (9.0) | 43.0 (10.9) | t(290) = −1.11 |
| Male | 35 (70.0%) | 66 (58.9%) | 101 (62.3%) | 38 (54.3%) | 37 (59.7%) | 75 (56.8%) | χ2(1) = .93 |
| Race | | | | | | | χ2(2) = 10.40** |
|  African American | 0 (0.0%) | 21 (16.4%) | 21 (11.5%) | 3 (4.2%) | 3 (4.3%) | 6 (4.3%) | |
|  Caucasian | 46 (83.6%) | 81 (63.3%) | 127 (69.4%) | 63 (88.7%) | 56 (80.0%) | 119 (84.4%) | |
|  Other | 9 (16.4%) | 26 (20.3%) | 35 (19.1%) | 5 (7.0%) | 11 (15.7%) | 16 (11.3%) | |
| Marital status | | | | | | | χ2(3) = 3.61 |
|  Single, never married | 24 (55.8%) | 53 (58.9%) | 77 (57.9%) | 31 (54.4%) | 26 (46.4%) | 57 (50.4%) | |
|  Single, living w/partner | 0 (0.0%) | 5 (5.6%) | 5 (3.8%) | 4 (7.0%) | 2 (3.6%) | 6 (5.3%) | |
|  Married | 3 (7.0%) | 6 (6.7%) | 9 (6.8%) | 7 (12.3%) | 8 (14.3%) | 15 (13.3%) | |
|  Div., Wid., Sep. | 16 (37.2%) | 26 (28.9%) | 42 (31.6%) | 15 (26.3%) | 20 (35.7%) | 35 (31.0%) | |
| Education | | | | | | | χ2(3) = 15.80*** |
|  Less than high school | 13 (40.6%) | 14 (28.6%) | 27 (33.3%) | 34 (54.8%) | 22 (48.9%) | 56 (52.3%) | |
|  High school graduate | 8 (25.0%) | 21 (42.9%) | 29 (35.8%) | 22 (35.5%) | 19 (42.2%) | 41 (38.3%) | |
|  Some college | 4 (12.5%) | 10 (20.4%) | 14 (17.3%) | 5 (8.1%) | 2 (4.4%) | 7 (6.5%) | |
|  College graduate | 7 (21.9%) | 4 (8.2%) | 11 (13.6%) | 1 (1.6%) | 2 (4.4%) | 3 (2.8%) | |
| Diagnosis | | | | | | | χ2(2) = .12 |
|  Affective D/o | 11 (25.6%) | 19 (19.2%) | 30 (21.1%) | 9 (14.1%) | 18 (31.6%) | 27 (22.3%) | |
|  Psychotic D/o | 22 (51.2%) | 77 (77.8%) | 99 (69.7%) | 49 (76.6%) | 33 (57.9%) | 82 (67.8%) | |
|  Other D/o | 10 (23.3%) | 3 (3.0%) | 13 (9.2%) | 6 (9.4%) | 6 (10.5%) | 12 (9.9%) | |

Notes: n’s vary by variable due to missing data

a ACT–IMR total versus control total

* P < .05; ** P < .01; *** P < .001

Implementation Check: Fidelity to IMR and ACT

We expected that the two conditions would not differ in ACT fidelity and that the ACT–IMR teams would show greater fidelity to IMR principles at 1- and 2-year follow-up. All four ACT teams maintained good ACT fidelity throughout the study period, with DACTS scores >4.0. At baseline, the Control ACT sites had slightly higher scores (mean = 4.6) than the ACT–IMR sites (mean = 4.1). This may have been because the Control sites had been in operation a year longer than the ACT–IMR sites. At the end of the study, the Control sites’ DACTS scores had decreased slightly (mean = 4.3), while ACT–IMR sites maintained their scores (mean = 4.1).

As shown in Fig. 1, programs were similar in baseline IMR fidelity, with initial scores ranging from 1.4 to 2.1 on a five-point scale. None of these programs would be considered faithful implementations of IMR at baseline. As expected, over time the ACT–IMR sites showed improved fidelity to the IMR model. By 12 months, the two ACT–IMR teams had reached high IMR fidelity, with scores over 4.0 that were maintained or improved over time. Control ACT teams continued to have low IMR fidelity. One site (Center C) did show a marked increase at 24 months, from 1.9 to 2.4, possibly due to starting another evidence-based practice, integrated dual disorders treatment, which shares a few common elements with IMR (notably motivational interviewing, involvement of significant others, and cognitive-behavioral techniques). However, this score is still far below the general standard of 4.0 indicating adherence to IMR principles.
Fig. 1

IMR fidelity across 2 years

Implementation Check: Penetration of IMR

We expected that the ACT–IMR teams would have greater penetration rates (i.e., more consumers who actually received the IMR intervention). No control participants received IMR services. By the end of the study period, 28 (21.9%) participants at Center B received at least some IMR services; 7 had graduated (i.e., completed the entire IMR package), 10 were currently enrolled, 5 were enrolled but not currently active in IMR, and 6 had dropped out. At Center A, 19 (34.5%) of the participants had received at least some IMR: 12 were active, 6 had dropped out, and one participant had requested a “break” from IMR. No clients at Center A had completed the entire IMR curriculum. Although the ACT–IMR programs had higher penetration rates than controls, rates were still quite low. Overall, 47 (25.7%) had some IMR exposure and only 7 (3.8%) completed the IMR program during the study period.

Comparison of ACT–IMR and Control Programs on Outcomes

As shown in Table 2, we examined differences between groups over time to test the hypothesis that ACT–IMR teams would show greater improvements. This hypothesis was not supported. Only one variable, substance abuse, showed greater improvement for the ACT–IMR teams over time (a significant interaction of time and condition). Of the other community integration indices, consumers in the ACT–IMR programs were more likely to live in independent housing from the beginning, and levels of independent housing increased across time for both conditions; however, the gap decreased across time as a larger proportion of consumers at the Control sites moved into independent housing. Also, consumers in the ACT–IMR programs were more likely to be homeless at each time period.
Table 2

Consumer outcomes across 2 years, N (%) of yes responses/M (SD)

| Outcome | Baseline/FY 04 ACT–IMR (N = 49) | Baseline/FY 04 Control (N = 73) | 1 Year/FY 05 ACT–IMR (N = 49) | 1 Year/FY 05 Control (N = 73) | 2 Year/FY 06 ACT–IMR (N = 49) | 2 Year/FY 06 Control (N = 73) | Main effect time | Main effect condition | Time × condition |
| Employed | 6 (12.2%) | 7 (9.6%) | 10 (20.4%) | 8 (11.0%) | 7 (14.3%) | 10 (13.7%) | F(2, 240) = .96 | F(1, 120) = .78 | F(2, 240) = .89 |
| Days employed | 5.2 (18.1) | 15.8 (65.9) | 14.7 (42.4) | 17.0 (64.8) | 12.5 (45.4) | 21.4 (69.8) | F(2, 240) = 1.77 | F(1, 120) = 0.56 | F(2, 240) = 0.73 |
| Incarcerated | 2 (4.1%) | 3 (4.1%) | 4 (8.2%) | 2 (2.7%) | 2 (4.1%) | 2 (2.7%) | F(2, 240) = .38 | F(1, 120) = .84 | F(2, 240) = .71 |
| Homeless | 2 (4.1%) | 0 (0.0%) | 3 (6.1%) | 0 (0.0%) | 2 (4.1%) | 0 (0.0%) | F(2, 240) = .29 | F(1, 120) = 7.18** | F(2, 240) = .29 |
| Hospitalized | 16 (32.7%) | 20 (27.4%) | 15 (30.6%) | 21 (28.8%) | 18 (36.7%) | 15 (20.5%) | F(2, 240) = 0.04 | F(1, 123) = 1.78 | F(2, 240) = 1.03 |
| Days hospitalized | 20.2 (71.4) | 10.1 (42.4) | 16.9 (48.6) | 18.4 (65.0) | 16.4 (55.3) | 9.0 (46.5) | F(2, 240) = 0.43 | F(1, 120) = 0.43 | F(2, 240) = 0.66 |
| Independent housing | 36 (78.3%) | 40 (55.6%) | 38 (82.6%) | 40 (55.6%) | 37 (80.4%) | 58 (80.6%) | F(2, 232) = 7.07*** | F(1, 116) = 5.47* | F(2, 232) = 7.07*** |
| Substance abuse treatment scale (SATS) | 5.3 (2.4), n = 31 | 5.1 (2.0), n = 36 | 5.4 (2.5), n = 31 | 4.9 (2.0), n = 36 | 6.1 (2.6), n = 31 | 4.6 (2.1), n = 36 | F(2, 130) = .22 | F(1, 65) = 2.09 | F(2, 130) = 4.67** |

Notes: SATS score and living arrangement are for the last quarter of each fiscal year. SATS scores do not include participants rated as never having had a substance use problem. a Not calculated because of the small number of people in these categories

* P < .05; ** P < .01; *** P < .001

As shown in Table 3, we examined the hypothesis that consumers in ACT–IMR programs would have greater improvements in illness self-management, hope, and satisfaction over time. Because of the small number of consumers who had data available at all three time periods, the results should be interpreted with caution; however, the hypotheses were not supported. Consumers across the two conditions did not improve on these measures over time and the conditions did not differ in general, or in their rate of change over time. The only exception was that consumers at the ACT–IMR sites were less satisfied overall than consumers at the Control sites. There was also one significant time effect, with clinician-ratings of client illness self-management showing improvements over time. However, this did not differ by condition.
Table 3

Consumer and clinician survey results across time, M (SD)

| Measure | Baseline/FY 04 ACT–IMR | Baseline/FY 04 Control | 1 Year/FY 05 ACT–IMR | 1 Year/FY 05 Control | 2 Year/FY 06 ACT–IMR | 2 Year/FY 06 Control | Main effect (time) | Main effect (condition) | Time × condition |
| Consumer-rated illness self management (ACT–IMR n = 12; Control n = 31) | 3.6 (.5) | 3.5 (.7) | 3.3 (.8) | 3.6 (.7) | 3.7 (.5) | 3.6 (.6) | F(2, 82) = 1.36 | F(1, 41) = .04 | F(2, 82) = 1.15 |
| Hope (ACT–IMR n = 10; Control n = 31) | 3.1 (.5) | 3.2 (.7) | 2.9 (.8) | 3.0 (.8) | 3.2 (.6) | 3.0 (.7) | F(2, 78) = 1.20 | F(1, 39) = .02 | F(2, 78) = .83 |
| Satisfaction (ACT–IMR n = 13; Control n = 32) | 2.6 (.4) | 2.7 (.3) | 2.4 (.4) | 2.7 (.3) | 2.5 (.3) | 2.6 (.4) | F(2, 86) = 1.50 | F(1, 43) = 4.10* | F(2, 86) = .45 |
| Clinician-rated illness self management (ACT–IMR n = 46; Control n = 50) | 3.0 (.5) | 3.1 (.6) | 3.3 (.4) | 3.3 (.6) | 3.2 (.5) | 3.2 (.6) | F(2, 188) = 7.42*** | F(1, 94) = .28 | F(2, 188) = .27 |

Notes: Scores on all scales except satisfaction range from 1 to 5, with higher numbers indicating more positive outcomes. Satisfaction ranges from 1 to 3, with higher numbers indicating greater satisfaction

* P < .05; ** P < .01; *** P < .001

Changes Over Time for Those Who Received IMR

We hypothesized that consumers who participated in IMR would have improved outcomes over time. As shown in Table 4, there were significant reductions in the number of hospital admissions and in the number of participants hospitalized from the year prior to starting IMR to the year after starting IMR, as well as a significant reduction in log-transformed days hospitalized. None of the other variables showed significant change over time.
Table 4

Outcomes 12 months before and after IMR

| Outcome | 12 months before IMR, M (SD)/n (%) | 12 months after IMR, M (SD)/n (%) | Test |
| Hospitalized | 14 (35.0%) | 4 (10.0%) | McNemar P = .01* |
| Hospital admissions | 1.0 (1.9) | 0.2 (0.9) | t(39) = 2.22, P = .03* |
| Days hospitalizeda | 10.9 (43.3) | 2.4 (9.3) | t(39) = 1.50, P = .14 |
| Days hospitalized (log)a | 0.4 (0.6) | 0.1 (0.4) | t(39) = 2.60, P = .01* |
| Incarcerated | 3 (7.5%) | 2 (5.0%) | McNemar P = 1.00 |
| Incarcerations | 0.2 (1.1) | 0.3 (1.3) | t(39) = −.43, P = .67 |
| Days incarcerateda | 1.7 (10.3) | 3.7 (23.2) | t(39) = −1.00, P = .32 |
| Homeless | 1 (2.5%) | 0 (0.0%) | McNemar P = 1.00 |
| Homeless periodsa | 0.2 (1.5) | 0.0 (0.0) | t(39) = 1.0, P = .32 |
| Days homelessa | 7.1 (45.1) | 0.0 (0.0) | t(39) = 1.0, P = .32 |
| Employed | 5 (12.5%) | 3 (7.5%) | McNemar P = .63 |
| Weeks employeda | 12.0 (58.0) | 11.9 (58.8) | t(39) = .06, P = .95 |
| SATS (highest) | 6.0 (2.2) | 6.4 (2.2) | t(36) = −1.2, P = .26 |
| Independent living | 36 (90.0%) | 39 (97.5%) | McNemar P = .25 |

* P < .05


In this study, we found that ACT programs were able to achieve high fidelity scores for both IMR and ACT. Similar to programs in the National Implementing Evidence-based Practices Project, our programs took about a year to achieve stable IMR fidelity, but achieved higher scores than sites in the national project (McHugo et al. 2007). This indicates that IMR specialists (both peer and clinician) can deliver structured, curriculum-based interventions to consumers with the most severe disabilities in a treatment setting that emphasizes assertive community outreach.

Although fidelity scores are one indicator of the quality of implementation, they do not incorporate information about the extent of access and engagement of consumers in the program (i.e., penetration), which was strikingly low. Some IMR specialists found devoting time to IMR was difficult to manage and preferred a mix of duties, thus diluting their concentration on IMR provision. IMR specialists also indicated that case management and crises were often more pressing to consumers and infringed on time set aside for IMR. Although these concerns were also voiced by other specialists on ACT teams, these issues appeared to be magnified in peers who were new to clinical practice and filling a new role on the team. An additional consideration is that the ACT–IMR programs did not establish specific expectations for the IMR specialists regarding the number of consumers who should participate in IMR over the course of the project. Previous work on IMR implementation has highlighted the need for setting clear expectations within IMR clinician job descriptions and other agency structures to support the practice, particularly within case management programs (Salyers et al. 2009a). We recommended that new IMR specialists start with 2–3 consumers and then gradually increase that number as they learned the modules and developed confidence working with IMR. IMR specialists, however, did not tend to increase their caseloads of consumers receiving IMR over time.

Another issue affecting IMR penetration was staff turnover. In Center A, the initial peer specialist left the team and the first clinician stopped providing IMR, so two new IMR practitioners had to be found for this team. In Center B, one IMR-trained clinician left the team. The challenge, therefore, is to find ways to increase consistent consumer access to IMR. Once IMR specialists are trained, adequate penetration requires accountability for providing IMR, with expectations incorporated into job descriptions and performance evaluations. Other strategies devised by our current teams include expanding the number of trained IMR practitioners and assisting specialists with time management so that IMR services are not “lost” in daily routines or crises. Other programs have offered IMR in group formats (Hasson-Ohayon et al. 2007; Mueser et al. 2006), which could increase penetration but may pose special challenges for ACT teams. In addition, the length of time to complete the IMR program, about 10 months, places a practical limit on the number of consumers one IMR specialist can work with in a 2-year period. We had two IMR specialists per team, and the number of consumers participating in IMR did grow over time, but not at a rate substantial enough to affect the broader team-level outcomes. Whitley and colleagues (2009) also highlight the need for leadership at a variety of levels, including the agency director level, to facilitate IMR implementation.

This study was unique in targeting peer specialists to deliver IMR services. We hypothesized that the use of peers in this role was important because peers could serve as strong role models and sources of hope as they taught illness self-management skills. We also hypothesized that IMR could provide a structured, valued role for peer specialists, helping to overcome some prior drawbacks of nominal positions on ACT teams (Basto et al. 2000; Carlson et al. 2001). Integration took some work, though. Because of their part-time hours, the peer specialists initially did not attend as many team meetings as other staff. We worked with the teams to adjust their schedules so that they could attend at least half of the morning meetings and better integrate their services with the rest of the team. In addition, consumer peer specialists may need more support outside the team. At their request, we set up a monthly phone call and a quarterly face-to-face meeting with other peer specialists so that they could support each other and discuss their unique roles on ACT teams. Not only were peer specialists learning the new skills of implementing IMR, they were also learning how to fit into an ACT team and balance their experiences as consumers with being providers of services.

Both centers took approximately 6 months to recruit and hire for the peer specialist position. In Indiana, ACT teams have had difficulty filling other specialist positions, such as nurses and substance abuse specialists. The hiring delay, therefore, may not reflect something specific to consumer peer specialists, but rather a systemic difficulty in obtaining specialist clinicians. Even so, hiring peer specialists was a new process for teams, and their agencies had to work through policies such as how to advertise the position, what questions they could ask during interviews, whether to hire people who had received services directly from their agency, how to determine work hours, how to grant access to consumer records, and how to provide training in human service provision and confidentiality issues. Programs also had to develop a peer specialist job description, which had not previously been available.

We found that continuous learning was critical for implementation, including follow-up consultation, supervision, and monitoring. This appeared particularly true for the peer specialists in this study. Although they had good experiences from their own recovery process to guide them, they needed more time than the clinicians to learn the motivational and cognitive-behavioral techniques incorporated into the IMR program. This is not specific to peer specialists per se, but a function of the type and level of graduate training through which clinicians are typically exposed to these clinical skills. We offered additional training in motivational interviewing and cognitive-behavioral techniques, as well as clinical consultation for IMR. Once the team leaders began meeting weekly with the peer specialists, the peers began to develop more confidence in their abilities. Finally, through our fidelity visits, we could monitor how well programs were actually implementing the model and provide on-site feedback and suggestions for improvement.

Outcomes at the program level were understandably negligible. As noted above, the low penetration rates made it difficult to show an effect of IMR at the team level. The finding that consumers who participated in IMR had reduced hospital use was reassuring, but receipt of IMR was not randomly assigned within teams, and outcomes may be due to other factors such as selection (e.g., IMR clinicians may have selected clients who were less likely to need hospitalization in the near future). Also, ACT services alone have a strong history of reducing hospitalizations (Bond et al. 2001), making it difficult to attribute these pre-post changes to IMR.

There were several limitations to this study. Perhaps the most notable was that the experimental manipulation confounded the addition of a consumer provider with the implementation of a new program. In addition, the ACT teams, though randomly assigned to implement IMR, differed from each other in important ways before the addition of a peer specialist and IMR, including ACT fidelity, length of time practicing ACT, and some baseline consumer outcomes such as homelessness. Although we had COMP data for most of the participants, the response rates for the surveys were low, and the differential response rates across time made it difficult to analyze the data longitudinally.

Despite these limitations and the modest consumer outcomes, we have seen several changes in practice directly related to this project. Both ACT–IMR teams maintained the consumer peer specialist position even after grant funding for the position ended, and both continue to provide IMR to consumers. Notably, Indiana has had a large expansion of consumer peer specialists: from only one at our initial pilot site prior to this grant (Salyers et al. 2009b) to 13 consumer peer specialists now providing IMR across the state. Based on our experiences in this project and a related project (Salyers et al. 2009a), we recommend that ACT teams seeking to implement IMR identify at least two clinicians (peer or non-peer) to be trained in the practice and dedicated to IMR work as their specialty, much like a substance abuse specialist or supported employment specialist. Team leaders and other administrators should provide clear expectations (e.g., penetration/IMR productivity standards) and support (e.g., training in advanced clinical techniques, quality supervision) for the IMR specialist role on the ACT team. As with the integration of any evidence-based practice into ACT, the team’s maturity and experience with the additional model should be taken into account. A new ACT team adding new practices may need more start-up time to become familiar with the model and how it fits with daily ACT functions. For teams that choose to implement with peer specialists, additional supports for the peer specialist and guidance for the supervisor may be required, particularly if an agency has never hired peers before. Other agencies in Indiana have now accumulated positive anecdotal experiences implementing IMR with peer specialists on ACT teams (Salyers et al. 2009b).



This study was funded by a grant from the National Institute of Disability and Rehabilitation Research (H133G030106). We appreciate the participation of Park Center in Fort Wayne, Northeastern Center in Kendallville, Four County Counseling Center in Logansport, and Community Mental Health Center in Lawrenceburg, Indiana. We also appreciate the assistance of Molly Gilreath in the preparation of tables and reports related to this project.


  1. Ahern, L., & Fisher, D. (2001). Recovery at your own PACE (personal assistance in community existence). Journal of Psychosocial Nursing & Mental Health Services, 39(4), 22–32.
  2. Allness, D. J., & Knoedler, W. H. (2003). The PACT model of community-based treatment for persons with severe and persistent mental illness: A manual for PACT start-up. Arlington, VA: NAMI.
  3. Basto, P. M., Pratt, C. W., Gill, K. J., & Barrett, N. M. (2000). The organizational assimilation of consumer providers: A quantitative examination. Psychiatric Rehabilitation Skills, 4(1), 105–119.
  4. Bond, G. R., & DeGraaf-Kaser, R. (1990). Group approaches for persons with severe mental illness. Social Work with Groups, 13(1), 21–36.
  5. Bond, G. R., Drake, R. E., Mueser, K. T., & Latimer, E. (2001). Assertive community treatment for people with severe mental illness: Critical ingredients and impact on patients. Disease Management & Health Outcomes, 9, 141–159.
  6. Bond, G. R., McDonel, E. C., Miller, L. D., & Pensec, M. (1991). Assertive community treatment and reference groups: An evaluation of their effectiveness for young adults with serious mental illness and substance abuse problems. Special issue: Serving persons with dual disorders of mental illness and substance use. Psychosocial Rehabilitation Journal, 15(2), 31–43.
  7. Bond, G. R., & Salyers, M. P. (2004). Prediction of outcome from the Dartmouth assertive community treatment fidelity scale. CNS Spectrums, 9(12), 937–942.
  8. Carlson, L. S., Rapp, C. A., & McDiarmid, D. (2001). Hiring consumer-providers: Barriers and alternative solutions. Community Mental Health Journal, 37(3), 199–213.
  9. Dickerson, R. J. (2002). Hope and self-esteem as outcome measures of a psychiatric inpatient cognitive-behavioral treatment program. Dissertation Abstracts International: Section B: The Sciences & Engineering, 63(6-B), 3004.
  10. Drake, R. E., Goldman, H. H., Leff, H. S., Lehman, A. F., Dixon, L., Mueser, K. T., et al. (2001). Implementing evidence-based practices in routine mental health service settings. Psychiatric Services, 52(2), 179–182.
  11. Gingerich, S., & Mueser, K. T. (2005). Illness management and recovery. In R. E. Drake, M. R. Merrens, & D. W. Lynde (Eds.), Evidence-based mental health practice: A textbook (pp. 395–424). New York: Norton.
  12. Gomory, T. (2001). A critique of the effectiveness of assertive community treatment. Psychiatric Services, 52, 1394.
  13. Hasson-Ohayon, I., Roe, D., & Kravetz, S. (2007). A randomized controlled trial of the effectiveness of the illness management and recovery program. Psychiatric Services, 58(11), 1461–1466.
  14. Larsen, D. L., Attkisson, C. C., Hargreaves, W. A., & Nguyen, T. D. (1979). Assessment of client/patient satisfaction: Development of a general scale. Evaluation and Program Planning, 2, 197–207.
  15. McGrew, J. H., Bond, G. R., Dietzen, L. L., McKasson, M., & Miller, L. D. (1995). A multi-site study of client outcomes in assertive community treatment. Psychiatric Services, 46, 696–701.
  16. McGrew, J. H., Johannesen, J. K., Griss, M. E., Born, D., & Katuin, C. (2005). Performance-based funding of supported employment: A multi-site controlled trial. Journal of Vocational Rehabilitation, 23, 81–99.
  17. McGrew, J. H., Wilson, R., & Bond, G. R. (2002). An exploratory study of what clients like least about assertive community treatment. Psychiatric Services, 53, 761–763.
  18. McHugo, G. J., Drake, R. E., Teague, G. B., & Xie, H. (1999). The relationship between model fidelity and client outcomes in the New Hampshire Dual Disorders Study. Psychiatric Services, 50, 818–824.
  19. McHugo, G. J., Drake, R. E., Whitley, R., Bond, G. R., Campbell, K., Rapp, C. A., et al. (2007). Fidelity outcomes in the national implementing evidence-based practices project. Psychiatric Services, 58(10), 1279–1284.
  20. Mowbray, C. T., Moxley, D. P., Jasper, C. A., & Howell, L. L. (1997). Consumers as providers in psychiatric rehabilitation. Columbia, MD: IAPSRS Publications.
  21. Mueser, K. T., Corrigan, P. W., Hilton, D. W., Tanzman, B., Schaub, A., Gingerich, S., et al. (2002a). Illness management and recovery: A review of the research. Psychiatric Services, 53, 1272–1284.
  22. Mueser, K. T., & Gingerich, S. (Eds.). (2002). Illness management and recovery implementation resource kit. Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.
  23. Mueser, K. T., Gingerich, S., Bond, G. R., Campbell, K., & Williams, J. (2002b). Illness management and recovery fidelity scale. In K. T. Mueser & S. Gingerich (Eds.), Illness management and recovery implementation resource kit. Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.
  24. Mueser, K. T., Meyer, P. S., Penn, D. L., Clancy, R., Clancy, D. M., & Salyers, M. P. (2006). The illness management and recovery program: Rationale, development, and preliminary findings. Schizophrenia Bulletin, 32(1), 32–43.
  25. Phillips, S. D., Burns, B. J., Edgar, E. R., Mueser, K. T., Linkins, K. W., Rosenheck, R. A., et al. (2001). Moving assertive community treatment into standard practice. Psychiatric Services, 52(6), 771–779.
  26. President’s New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America. Final report. DHHS Pub. No. SMA-03-3832. Rockville, MD: Substance Abuse and Mental Health Services Administration.
  27. Press, A., Marty, D., & Rapp, C. (2003). Consumer outcomes monitoring package. Lawrence, KS: The University of Kansas School of Social Welfare.
  28. Rivera, J. J., Sullivan, A. M., & Valenti, S. S. (2007). Adding consumer-providers to intensive case management: Does it improve outcome? Psychiatric Services, 58(6), 802–809.
  29. Salyers, M. P., Godfrey, J. L., McGuire, A. B., Gearhart, T., Rollins, A. L., & Boyle, C. (2009a). Implementing the illness management and recovery program for consumers with severe mental illness. Psychiatric Services, 60(4), 483–490.
  30. Salyers, M. P., Godfrey, J. L., Mueser, K. T., & Labriola, S. (2007). Measuring illness management outcomes: A psychometric study of clinician and consumer rating scales for illness self management and recovery. Community Mental Health Journal, 43(5), 459–480.
  31. Salyers, M. P., Hicks, L. J., McGuire, A. B., Baumgardner, H., Ring, K., & Kim, H. W. (2009b). A pilot to enhance the recovery orientation of assertive community treatment through peer provided illness management and recovery. American Journal of Psychiatric Rehabilitation, 12(3), 191–204.
  32. Salyers, M. P., & Tsemberis, S. (2007). ACT and recovery: Integrating evidence-based practice and recovery orientation on assertive community treatment teams. Community Mental Health Journal, 43(6), 619–641.
  33. Snyder, C. R., Sympson, S. C., Ybasco, F. C., Borders, T. F., Babyak, M. A., & Higgins, R. L. (1996). Development and validation of the State Hope Scale. Journal of Personality and Social Psychology, 70(2), 321–335.
  34. Solomon, P. (2004). Peer support/peer provided services underlying processes, benefits, and critical ingredients. Psychiatric Rehabilitation Journal, 27(4), 392–401.
  35. Teague, G. B., Bond, G. R., & Drake, R. E. (1998). Program fidelity in assertive community treatment: Development and use of a measure. American Journal of Orthopsychiatry, 68(2), 216–232.
  36. Whitley, R., Gingerich, S., Lutz, W. J., & Mueser, K. T. (2009). Implementing the illness management and recovery program in community mental health settings: Facilitators and barriers. Psychiatric Services, 60, 202–209.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Michelle P. Salyers (1, 2, 3, 4)
  • Alan B. McGuire (1, 2, 3)
  • Angela L. Rollins (2, 3)
  • Gary R. Bond (5, 6)
  • Kim T. Mueser (6, 7)
  • Veronica R. Macy (8)

  1. VA HSR&D Center on Implementing Evidence-Based Practice, Roudebush VA Medical Center, Indianapolis, USA
  2. ACT Center of Indiana, Indianapolis, USA
  3. Department of Psychology, Indiana University Purdue University Indianapolis (IUPUI), Indianapolis, USA
  4. Regenstrief Institute, Inc., Indianapolis, USA
  5. Department of Psychiatry, Dartmouth Medical School, Hanover, USA
  6. Dartmouth Psychiatric Research Center, Lebanon, USA
  7. Departments of Psychiatry and Community and Family Medicine, Dartmouth Medical School, Hanover, USA
  8. Recovery Network Unlimited, Indianapolis, USA