Evaluating the perceived benefits of CACREP accreditation

Thomas R. Scofield
University of Wisconsin Oshkosh

David D. Hof
University of Nebraska at Kearney

Abstract

Survey results of community counseling programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) were analyzed across four groups (enrolled students, program graduates, faculty, and administrators) with regard to what respondents saw as benefits of accreditation and how these benefits were measured. Although respondents overwhelmingly endorsed perceived benefits of accreditation, they reported no systematic method in place to measure such benefits. Findings are discussed with regard to how respondents believed benefits could be measured, suggesting an identified need to develop a systematic model to measure benefits of accreditation.

Evaluating the perceived benefits of CACREP accreditation

Questions continue to arise as to whether the perceived benefits of CACREP accreditation are worth the expenditure of time and money required to seek it. That is, to what extent do a program's faculty, currently enrolled students, graduates, and administration benefit by adhering to the standards of preparation espoused by CACREP? A number of reviews have explored the problems and costs associated with accreditation (e.g., Zimpfer, Mohdzain, West, & Bubenzer, 1992), just as others (e.g., Wittmer, 1988; Bahen & Miller, 1998; Vacc, 1992) have highlighted the benefits of such an undertaking. Whatever position is taken, debate continues as to the relative costs versus benefits of CACREP accreditation.

In conjunction with such contrasting views comes a new impetus and imperative to examine the reasons for seeking and attaining accreditation. According to Vacc (1992), fundamental to establishing a credible relevancy criterion for evaluating programs for accreditation purposes is the need for empirical data associated with such efforts. At question, then, is whether the process itself can demonstrate an established, systematic, and credible method for measuring purported benefits. These questions and others related to accreditation have not been adequately addressed and continue to call for evaluation (“CACREP Connection,” 1999-2000). Despite the volume of literature on the topic of accreditation, very little has been done to evaluate how programs measure stated benefits.

Perhaps the broadest undertaking to secure information regarding the perceived benefits of CACREP accreditation was conducted by Cecil, Havens, Moracco, Scott, Spooner, and Vaughn (1987). Albeit dated, this study paid particular attention to groups of respondents representing programs with accreditation intentions, accredited programs, and programs with no accreditation intentions. While the thrust of their study was to seek information on the status of the accreditation movement, it also solicited information regarding the benefits of accreditation. Respondents were asked to endorse those content items they believed to be benefits of accreditation (Cecil et al., 1987, p. 176). This list was derived from the working knowledge and beliefs of those authors, given their expertise with the subject, as to what they felt were representative benefits of accreditation. However, then as now, the literature offers no definitive data regarding whether cited benefits of accreditation are measured systematically or established through opinion only. Thus, the direction of the present study is two-fold: first, to learn whether faculty, currently enrolled students, program graduates, and administrators currently endorse earlier perceived benefits of accreditation; and second, to learn how such benefits, if endorsed, are in fact measured.

Method

Population

The population for the current study included individuals (i.e., enrolled students, program graduates, faculty, and administrators) from the 106 nationally identified CACREP accredited Community Counseling programs (Hollis, 1997). To maximize potential data collection, all accredited programs were mailed surveys. The survey was piloted to determine that instructions, questions, and procedures were clear and explicit, as well as to correct obvious flaws (Gall, Gall, & Borg, 2003). Because data received from the piloted surveys at the 106th institution were not included, the number of programs receiving surveys was reduced to 105. The total number of programs responding was 26 (25% of CACREP accredited programs).

Of the 110 (35%) currently enrolled students responding, 65 identified themselves as Master's level students and 33 as doctoral. The average age for this group was 35.2. The group was comprised of 18 males and 92 females. Of those responding, 77 identified themselves as White and 9 as Persons of Color other than White.

Of the 40 (13%) program graduates responding, 29 identified themselves as Master's level graduates and 13 as doctoral level graduates. The average age for this group was 37.8. The group was comprised of 12 males and 28 females. Of those responding, 29 identified themselves as White and 1 as a Person of Color other than White.

Of the 101 (32%) faculty members responding, 30 identified themselves as assistant professors, 35 as associate professors, 34 as full professors, and 2 as teaching in non-tenured track positions. The average age of this group was 48.5. The group was comprised of 48 males and 53 females. Of those responding, 69 identified themselves as White and 12 as Persons of Color other than White.

Finally, of the 63 (20%) administrators responding, 11 identified themselves as program coordinators, 26 identified themselves as department chairs, 15 identified themselves as deans, 3 identified themselves as vice presidents and 8 individuals fell into the category of other. The average age for this group was 52.7. The group was comprised of 31 males and 32 females. Of those responding, 43 identified themselves as being White and 7 as Persons of Color other than White.

Instrument

A list, based on the benefits generated in the Cecil et al. (1987) study, was used by the authors to solicit feedback regarding the perceived benefits of CACREP accreditation from the perspectives of currently enrolled students, program graduates, faculty, and administrators.

Selecting purposefully (Gall et al., 2003) from the list of benefits offered by Cecil et al. (1987), the authors of the present study were able to collect data on what benefits these four groups of respondents would now endorse, having sought and attained accreditation. The list of benefits and accompanying questions for the current research were selected only to generate information on which benefits would currently garner endorsement; they were not used for comparison to the Cecil et al. (1987) study.

Each question was rated by the respondent on a 5-point Likert scale (1 = strongly agree; 5 = strongly disagree). It should be noted that an additional item (i.e., 6 = no response), having no weight, was used to capture the frequency of individuals not responding to the question.
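The scoring scheme described above can be illustrated with a short, hypothetical sketch (the function, variable names, and sample data are ours for illustration; the study's actual analysis used SPSS): ratings of 1-5 contribute to the item mean and standard deviation, while a code of 6 (no response) carries no weight and is only tallied as a frequency.

```python
# Hypothetical illustration of the item scoring described above:
# ratings 1-5 are summarized; code 6 ("no response") has no weight
# and is counted only as a frequency of non-response.
from statistics import mean, stdev

def summarize_item(ratings):
    """Return (n, mean, sd, no_response_count) for one survey item."""
    valid = [r for r in ratings if 1 <= r <= 5]      # drop code 6 before averaging
    no_response = sum(1 for r in ratings if r == 6)  # tally non-responses
    return len(valid), round(mean(valid), 2), round(stdev(valid), 3), no_response

# Invented example: ten respondents, two of whom left the item blank (code 6).
n, m, sd, skipped = summarize_item([1, 2, 1, 6, 3, 2, 1, 6, 2, 4])
```

This is one plausible reading of "having no weight"; the key point is that non-responses are excluded from the denominator rather than coded as a sixth scale point.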

In addition, for each survey response, the respondent was asked to provide qualitative information, “Would you briefly describe how this (question item) was measured?” Each respondent was also asked, “What are your ideas for how it (question item) could be measured?”

Reliability was achieved through the use of a multi-rater card sort system. Four individual raters (two doctoral level and two Master's level individuals, trained in counseling, interviewing, and qualitative analysis skills) independently reviewed the survey questions completed by all respondents (currently enrolled students = 5 questions; program graduates = 5; faculty = 16; administrators = 7; a total of 33 questions), noted emerging themes, and placed these themes onto note cards. A card sort process (see Analysis) was utilized to reduce themes for each question. For the purposes of establishing inter-rater reliability, it was determined a priori that, for an item to be classified as a theme, at least three of the four independent raters would have to have listed it during the card sort process.

Procedure

For all 105 accredited programs that were sent surveys, four separate groups with three respondents per group (i.e., 3 currently enrolled students, 3 program graduates, 3 faculty, and 3 administrators) were asked to complete surveys. The program Chair of each surveyed department received a research packet, with cover letter, describing the scope of the research and the procedures for disseminating the packet of surveys. The Chair was asked to distribute the surveys to three individuals from each of the following groups: currently enrolled students, program graduates, faculty members, and administrators. The Chair was also asked to include him/herself as one of the three individuals identified for the group representing administrators. It was the authors' intent to have the questionnaires distributed by a central person within each department; because not all CACREP liaisons were known at the time of the study, it was thought that the Chair of each department would be the most accessible for the initial contact. Respondents were asked to complete and return the survey in the self-addressed stamped envelope provided.

Utilizing best practices in data collection procedures and non-respondent follow-up methods, programs not initially responding were provided a second mailing (Gall et al., 2003). It should be noted that even though the authors provided a second mailing, which according to Worthen and Brezezinski is the best form of follow-up (as cited in Gall et al., 2003), as well as a third contact in the form of a phone call, a lower than expected program return rate was obtained. The authors believe, however, that the number of respondents within each group of the participating programs (currently enrolled students N = 110; graduates N = 40; faculty members N = 101; administrators N = 63), totaling 314 participants, provides a representative sample from accredited programs.

Analysis

Quantitative analysis was completed for the Likert ratings of the questions for each of the four groups. Because the questions differed across the four groups, a comparison of means between groups could not be generated. Means and standard deviations were derived using SPSS 10.0 (see Table 1).

Qualitative themes emerged, through a multiple independent rater system, from the questions “Would you briefly describe how this (question item) was measured?” and “What are your ideas for how it (question item) could be measured?” Four individual raters noted emerging themes from their independent review of the 33 questions (110 currently enrolled students responding to 5 questions [550 responses]; 40 program graduates responding to 5 questions [200 responses]; 101 faculty responding to 16 questions [1,616 responses]; and 63 administrators responding to 7 questions [441 responses]) and placed these themes onto note cards. The following instructions, describing how the raters were to evaluate and distill the qualitative data, were given to the four independent raters prior to their reviewing any data:

The first step in this process is to recognize what you are looking at. There are four separate groups of respondents who were asked to participate in the research project. Within each of these groups were a number of prepared questions that were asked (for example, 5 questions for currently enrolled students; 5 questions for program graduates; 16 questions for faculty; and 7 questions for administrators, for a total of 33 questions). You will recall that in addition to the request for quantitative data, two additional qualitative questions were also asked (i.e., “How was this measured?” and “How could this be measured?”). For each of these questions you are to note, through your independent judgment, any emerging themes. Begin with the first list of responses (i.e., currently enrolled students) under the question, “How was this measured?” Read through the list of responses once in their entirety. After becoming familiar with the responses, start reducing the information by combining similar items. Continue this process until all similar items have been combined. The number of themes may vary depending on the quantity of similar or dissimilar responses. Your identified themes are to then be transferred to 3x5 note cards. Label each note card as to the question number and as to which of the two qualitative questions you are reviewing. Continue this process until you have reviewed all 33 questions. Then, as a group, all the independent raters will meet; each rater will share the themes that emerged from their reduction of the data. For an item to constitute a theme, it must appear no fewer than three times when you are independently reducing the data. For the purposes of this study a minimum of three raters, out of the four, would be required to agree on any one item for it to be classified as a theme.
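The three-of-four agreement rule above can be sketched in a few lines (a hypothetical illustration with invented theme labels; the study's actual card sort was performed by hand): a candidate theme survives only if at least three of the four independent raters listed it for a given question.

```python
# Hypothetical sketch of the card-sort agreement rule: a candidate
# theme is retained only if at least 3 of the 4 independent raters
# listed it for a given question.
from collections import Counter

def retained_themes(rater_cards, threshold=3):
    """rater_cards: one set of candidate themes per rater, for one question."""
    counts = Counter(theme for cards in rater_cards for theme in cards)
    return sorted(t for t, c in counts.items() if c >= threshold)

# Invented example: four raters' note cards for a single question.
cards = [
    {"by survey", "by opinion"},
    {"by survey", "by opinion", "no change"},
    {"by survey", "no change"},
    {"by opinion", "by survey"},
]
themes = retained_themes(cards)  # "no change" falls short at 2 of 4 raters
```

The a priori threshold functions like a simple majority vote across raters, which is what gives the resulting themes their inter-rater reliability.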

Results

The individual survey questions with means and standard deviations are presented in Table 1.

Table 1

Benefits Associated with CACREP Accreditation

   
Benefit                                                                  N   Mean     SD
_________________________________________________________________________________________

Current Students

    Contributes to a stronger professional identity and
      commitment to the Counseling profession                            96   1.64   .756
 4) Increases marketability                                              94   1.66   .727
    Increases student pride in and satisfaction with program             97   1.73   .901
    Increases and/or improves opportunities for internship
      and field practicum placements                                     96   2.32  1.165
    Increases the level of student involvement in the program,
      research, and/or profession                                        96   2.34  1.012

Program Graduates

10) Contributed to a stronger professional identity and
      commitment to the Counseling profession                            40   1.55   .597
 9) Increased the opportunities for placement following graduation       40   2.07  1.023
 8) Increased and/or improved opportunities for internship
      and field practicum placements                                     39   2.13   .923
 7) Increased the level of student involvement in your program,
      research, and/or profession                                        39   2.23  1.012
 6) Increased contacts and interaction with field professionals          38   2.24  1.025

Faculty

20) Enhanced professional status of program                              96   1.64   .682
18) Enhanced licensure and certification opportunities for graduates     96   1.72   .970
11) Improved program quality                                             96   1.77   .876
26) Contributed to and strengthened the counseling profession            92   1.80   .788
21) Contributed to faculty pride in and satisfaction with the
      Community Counseling program                                       95   1.99   .962
22) Assisted program to compete with other CACREP accredited programs    89   2.03   .898
25) Improved/increased administrative support                            93   2.06   .949
15) Improved the academic quality of students while in the program       95   2.12   .849
16) Improved the academic quality of applicants considering or
      students entering the program                                      94   2.27   .964
12) Improved continuity and sequencing of course content                 98   2.29   .919
17) Assisted program to increase or maintain student enrollment          92   2.41   .951
24) Promoted a deepened sense of professional commitment on the
      part of faculty                                                    96   2.50  1.066
23) Assisted program to compete with others that hold
      accreditation other than CACREP                                    85   2.54  1.160
14) Increased level of student involvement in program, research,
      and/or profession                                                  96   2.55   .993
13) Increased program and professional involvement among and
      between faculty                                                    98   2.63   .957
19) Increased and/or improved opportunities for internship and
      field practicum placements                                         94   2.73   .941

Administration

27) Enhanced the status of the program on this campus                    65   1.55   .751
29) Made the program more attractive for recruiting purposes             64   1.62   .900
33) Improved administrative support                                      64   2.17  1.001
28) Assisted program to compete with other CACREP accredited programs    59   2.14  1.090
31) Improved the academic quality of students while in the program       58   2.40  1.042
32) Assisted program to increase or maintain student enrollment          64   2.41   .988
30) Improved the academic quality of applicants considering or
      students entering the program                                      62   2.54  1.017

Note. Likert Scale ranged from 1 (strongly agree) to 5 (strongly disagree).

The reader will note that question numbers appear out of sequence; items are ordered by mean, from lowest (strongly agree) to highest (strongly disagree), within each group of respondents. Grand averages from Table 1 were derived and are as follows: currently enrolled students (N = 110, responding to 5 questions) had a mean of 1.94 and a standard deviation of .91; graduates (N = 40, responding to 5 questions) had a mean of 2.04 and a standard deviation of .91; faculty members (N = 101, responding to 16 questions) had a mean of 2.22 and a standard deviation of .93; and administrators (N = 63, responding to 7 questions) had a mean of 2.12 and a standard deviation of .91.

Qualitative questions were used to solicit more detail as to how each of the four groups felt benefits were measured or how they could be measured within their respective programs. The following table illustrates qualitative themes related to proposed methods of benefit measurement from each of the four subgroups.

Table 2

Qualitative Themes Related to Proposed Methods of How Benefits Could be Measured


Perspective How Measured How Could Be Measured

Students “by standards” “by survey”
  “by personal opinion” “by personal opinion”
Graduates “by opinion/self-report” “by comparing CACREP vs. non-CACREP programs”
  “by emphasis on accreditation” “by survey of graduates”
    “by survey of placement sites”
Faculty Members “by opinion” “by performance”
  “by performance on NCE” “by performance on NCE”
  “by faculty feedback” “by survey of faculty”
    “by survey of graduates”
    “by survey of administrators”
    “by survey of site supervisors and employers”
Administrators “by number of program applicants” “by comparing the program pre- and post-accreditation (GRE, GPA, enrollment numbers)”
  “by applicant inquiry about program accreditation” “by survey of students”
    “by survey of faculty”

 

Additionally, the authors wanted to evaluate whether commonalities existed among the four groups. Thus, the responses of the four groups were collapsed, using the same card sort process described previously. The overall themes that emerged indicated the groups felt benefits were measured through opinion or self-report, or that accreditation had produced no change or impact. For the qualitative questions soliciting information on how an item could be measured, overall responses indicated measurement by opinion/self-report, by survey (i.e., of graduates, employers/site supervisors, students, faculty, or administration), or by program comparison (i.e., CACREP vs. non-CACREP, pre- and post-CACREP accreditation), with some again indicating that no change or impact had occurred.

Discussion

While the results suggest respondents feel that accreditation is beneficial, respondents also reported that no systematic methods were in place to measure these perceived benefits. When indicating how benefits could be measured, respondents overwhelmingly pointed to survey methods as the most appropriate approach. In addition, respondents suggested that employers and site supervisors should be surveyed as well. This latter aspect has only recently been assessed at the state level (Lukasiewicz, White, Scofield, & Hof, 2003), with results indicating a greater need for dissemination of information to employers and site supervisors regarding the benefits of accreditation.

A theme of program comparison also emerged: contrasting CACREP with non-CACREP programs to measure appreciable benefits. Such an undertaking, however, would require considerable coordination among programs, as well as with CACREP, to secure this information. It was also noted that accrued benefits could be determined from comparisons of programs before and after CACREP accreditation, as mentioned above. Again, although respondents noted how specific information might be gathered, no systematic method to measure these pre- to post-accreditation changes was in place. Although the findings appear disparate, the authors believe it plausible that programs without a systematic method in place to measure the benefits of accreditation would be those least able to discern pre- to post-accreditation changes and thus would, by default, report little change or impact.

The distillation of data, across all four groups of respondents, identifies the absence of, and therefore an existing need for, a more systematic method of data collection regarding the benefits of CACREP accreditation. Again, soliciting information utilizing surveys was found to be the most desirable method to elicit information. What remains to be developed, however, are empirically-driven methods for data collection that will provide greater credibility to the benefits of CACREP accreditation.

Conclusion

The purpose of this study was to learn if faculty, currently enrolled students, program graduates and administrators currently endorsed earlier perceived benefits of accreditation and secondarily how such benefits, if endorsed, were in fact measured.

Although respondents overwhelmingly endorsed benefits of accreditation, the current study found that no systematic method was in place to measure such benefits. The authors believe the results of the present study clearly point to the need for a systematic model that will aid in empirically measuring the benefits of accreditation.

Readers must bear in mind that the list of questions developed for this study was derived and adapted from the earlier work of Cecil et al. (1987). Moreover, the reader must note that the list of benefits in the present study was not meant to be exhaustive but only representative of those benefits that might pertain to the respective perceptions of currently enrolled students, program graduates, faculty, and administrators. And though one could argue that the benefits used in the present study may have been reasonable benchmarks 20 years ago, the findings of the current study, in the form of emergent themes, support these earlier endorsed benefits.

If future efforts are to lend greater clarity to the existing research, efforts must move to promote empirical validation of accreditation benefits through the development of appropriate models of data collection.


References

Bahen, S. C., & Miller, K. E. (1998). CACREP accreditation: A case study. Journal of Humanistic Education and Development, 37, 117-127.

Cecil, J. H., Havens, R., Moracco, J. C., Scott, N. A., Spooner, S. E., & Vaughn, C. (1987). CACREP accreditation intention. Counselor Education and Supervision, 27, 174-183.

Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). White Plains, NY: Longman Publishers USA.

Hollis, J. W. (1997). Counselor preparation 1996-1998: Programs, faculty, trends (9th ed.). Bristol, PA: Taylor & Francis; Greensboro, NC: National Board for Certified Counselors.

Lukasiewicz, S., White, C., Scofield, T., & Hof, D. (2003). CACREP accreditation: Does accreditation enhance employment potential for counseling graduates? Nebraska Counselor, 34, 3-11.

Request for proposals. (1999-2000, Winter). CACREP Connection, 13-16.

Vacc, N. A. (1992). An assessment of the perceived relevance of the CACREP standards. Journal of Counseling and Development, 70, 685-687.

Wittmer, J. (1988). Professional identity and accreditation. Counselor Education and Supervision, 27, 291-297.

Zimpfer, D. G., Mohdzain, A. Z., West, J. D., & Bubenzer, D. L. (1992). Professional identification of counselor preparation programs. Counselor Education and Supervision, 32, 91-107.