Facilitating Accountability Data Collection for Use in Counseling Effectiveness Assessment


Performance accountability data is required for college counseling professionals to show program effectiveness. Collecting timely, useable data for program assessment presents special challenges because of ethical and privacy issues. The purpose of this study was to examine factors that might facilitate data collection through classroom research. Thirty-two (32) faculty members participated in the study, and useable data was collected from 1,200 university students. A discriminant model was found that significantly increased the researchers' ability to identify faculty who would participate in a college counseling classroom research activity, correctly classifying 83.56% of cases.


Facilitating Accountability Data Collection for Use

in Counseling Effectiveness Assessment

College counseling programs across the United States are being targeted for budget cuts that range from shrinking operational budgets to total elimination of programs (Hayes, 2002). Because many state budgets are struggling with decreasing revenues and increasing needs, budget cuts are being passed on to state agencies. Higher education is a discretionary category which is subject to budget cuts in many states. As budget cuts are passed on to the university, institutions must decide which programs to fund and which to cut. Counselors are being asked to provide accountability data that documents their program effectiveness. Schmidt (2003) defines the purpose of accountability data as "the ability to show what services are being offered and the difference these services make in the lives of people" (p. 132). One challenge for counselors is to gather valid accountability data in a timely manner for the results to be useful in program policy decisions.

College counseling programs include a variety of counseling areas. Hayes (2002) describes college counseling as the various types of counseling that a university provides for its students. This includes academic advising, personal counseling, crisis intervention, financial aid counseling, and career counseling. Traditionally, research in this area has focused on 1) input, 2) environment, or 3) output (Astin, 1993; Komives, Woodard, & Associates, 1996). Input is what the student brings to the university, environment is the process that occurs while the student is enrolled at the university, and output is the changes that can be measured in the student after his/her university experience.

In the past, college counseling program accountability data has been collected for input and output such as number of students receiving counseling services, student characteristics, percentage of students entering the institution who graduate, and length of time from college entrance to graduation (Astin, 1993; Corrallo, 1991; Komives, et al., 1996). The environmental process area which covers the effectiveness of counseling interventions has received the least attention (Astin, 1993; Komives, et al., 1996; Schmidt, 2003; Whiston, 2000). Environmental processes include all activities provided through the university experience. Some of these processes are classroom instruction, advising, counseling, and student activities.

One reason environmental process issues have not been the focus of more research studies is that they require student responses for assessment (Komives, et al., 1996). Traditionally, researchers in college counseling areas have used output data as a measure of environmental process effectiveness (Astin, 1993). As in many social science research studies, issues of interest to college counseling researchers are often limited to post hoc studies because the phenomenon of interest could result in physical or mental harm to individuals and should not be created in an experimental setting (Cozby, 2001). Many higher education institutions use exit surveys to collect student response data about environmental processes. While exit data provides valuable information, there are two disadvantages that might affect the usefulness of exit data: 1) student recall of earlier events may be inaccurate and 2) the data may be valid for the respondents but not for future student populations. If college counselors could develop research techniques to better facilitate collection of data for program assessment, counselors would be able to provide the accountability data being requested by state legislators, higher education institutions, and the public (Komives, et al., 1996).

The primary purpose of this study was to identify factors which influence the participation of university faculty in a college counseling classroom research activity. The research activity used for this study was a survey that examined students’ perceptions of the effectiveness of financial aid counseling practices at the university. Prior to this study we had tried unsuccessfully to identify a representative sample of students who had received financial aid counseling. We could not use the university’s financial aid records to identify and contact students because student privacy is protected by the Family Educational Rights and Privacy Act of 1974.

The following objectives were developed to guide this study:

1. To determine if data collection in a university classroom setting facilitated the research process by producing timely, useable data.

2. To compare methods of communication used to solicit participation in a classroom research activity for university faculty who chose to participate in the study and university faculty who chose not to participate in the study.

3. To determine if a model exists that significantly increases the researchers’ ability to correctly classify faculty members in a research extensive university on their decision regarding participation in a classroom based research activity.



Method

This exploratory study sought to identify effective and efficient ways to facilitate data collection from a representative sample of university students to assess the effectiveness of college counseling programs. When we planned this study we had aggregate data for our institution. After examining the aggregate data, we determined that approximately one third of the students at our institution were receiving federal financial aid (9,893 students out of 29,881 students enrolled). The Common Manual: Unified Student Loan Policy (1997) serves as the guide for financial aid policy. The Common Manual specifies that part of the financial aid administrator's responsibilities is "Ensuring that each borrower receives adequate financial aid and debt management counseling" (p. 8). According to Cozby's (2001) sample size table, we needed a sample of between 964 and 1,045 students for a 95% confidence level with plus or minus 3% accuracy (p. 107).
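The table values cited from Cozby can be approximated with the standard formula for estimating a proportion. The sketch below is illustrative only; the function name and the use of a finite population correction are our additions, not part of the study or of Cozby's table.

```python
import math

def required_sample_size(z=1.96, margin=0.03, p=0.5, population=None):
    """Sample size needed to estimate a proportion within +/- margin."""
    # Worst-case proportion p = 0.5 maximizes the required n.
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite population correction for a known enrollment size.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Infinite-population requirement at 95% confidence, +/- 3%:
print(required_sample_size())                  # 1068
# Adjusted for this study's enrollment of 29,881 students:
print(required_sample_size(population=29881))  # 1031
```

These figures are of the same magnitude as the 964 to 1,045 range the study took from Cozby's table; differences reflect rounding conventions in published tables.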


Approximately 30,000 students were enrolled in 6,000 courses from 10 academic program units during the semester that we gathered accountability data. The first step in the sample selection process was to select classes until a minimum of 1,200 students were included in the sample. Only regularly scheduled courses with identified meeting times were included for consideration. Only one class per faculty member was chosen; therefore, when a class was selected as part of the sample, all of that faculty member's other courses were removed from the sample selection process.

Because of the large sampling frame, a systematic sample with a random start and a sampling interval of 150 was used to distribute the sample more evenly over the population. An original sample of eighty-four (84) faculty members was selected from a pool of 1,345 faculty members based on course enrollment limits in the university schedule bulletin. Five of the sampled faculty members' classes were not held that semester, and these faculty members were eliminated from the study, leaving a final sample of seventy-nine (79) faculty members.
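The sampling procedure above, including the one-class-per-faculty-member rule, can be sketched as follows. The data structure and function names are hypothetical; only the interval of 150, the pool of roughly 6,000 courses, and the 1,345 faculty members come from the study.

```python
import random

def systematic_sample(course_list, interval=150, seed=None):
    """Systematic sample with a random start.

    course_list: sequence of (course_id, instructor_id) pairs in
    schedule-bulletin order (a hypothetical structure for illustration).
    Keeps at most one course per instructor, mirroring the study's rule.
    """
    rng = random.Random(seed)
    start = rng.randrange(interval)          # random start in [0, interval)
    seen_instructors = set()
    sample = []
    for i in range(start, len(course_list), interval):
        course_id, instructor_id = course_list[i]
        if instructor_id in seen_instructors:
            continue                         # one class per faculty member
        seen_instructors.add(instructor_id)
        sample.append((course_id, instructor_id))
    return sample

# Illustrative pool: 6,000 courses spread over 1,345 instructors.
courses = [(f"C{i}", f"F{i % 1345}") for i in range(6000)]
print(len(systematic_sample(courses, seed=1)))  # 6,000 at interval 150 -> 40
```

In practice the researchers continued selecting until the classes covered at least 1,200 students, so the realized faculty sample (84) was larger than a single pass of this size would yield.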



Two instruments were used to collect data in this study: 1) a researcher designed recording form to gather data about faculty participants and 2) a survey instrument to gather responses from students.

Information about each of the faculty members included in the research sample was recorded for the following independent variables which were under investigation:

1. Academic college/school to which the faculty member belonged

2. Whether or not the faculty member agreed to participate in the classroom research activity

3. Faculty member’s gender

4. Faculty member’s rank (Professor, Associate Professor, Assistant professor, Instructor)

5. Class Enrollment Numbers

a. Course enrollment limits listed in the university schedule bulletin

b. Actual course enrollment

c. Number of students in the course who participated in the research activity

6. Types of Communication

a. Number of e-mail messages sent to the faculty member

b. Number of e-mail messages received from the faculty member

c. Whether or not the faculty member and researcher met in person

d. Number of phone calls made by researchers

e. Number of phone calls received from faculty member

The student survey instrument used in the classroom activity was developed with the help of the state financial aid office personnel. It focused on financial aid counseling practices. The survey included a demographic section (nine questions), a financial aid counseling section (24 questions), and a suggestion section (1 question).



Each member of the selected faculty sample was sent an introductory e-mail letter with an attached copy of the classroom research activity and a short synopsis of its purpose: to collect data on the effectiveness of the institution's financial aid counseling practices. Each faculty member was asked to allow us to conduct the research activity during class time. The activity was a survey that took approximately 15 minutes to complete. Faculty members who agreed to participate in the study selected the day for administration of the activity and the point during the class period when it was administered. Although we expressed a preference for administering the classroom activity ourselves, faculty members could choose to administer it themselves.

The request for faculty participation in the research study was sent by e-mail from a graduate researcher, with no endorsements from administrators and no incentives for participation. Six of the faculty members agreed to participate in the study as soon as they were asked, which was the ideal response.

Approximately one week after the initial e-mail packet was sent, a follow-up e-mail letter was sent to the 73 faculty members in the sample who had not yet responded. If faculty members did not respond to the second e-mail within one week, we made phone calls to them.

The e-mail follow-up letter was used as a phone conversation guideline. During the first call, a message was not left if the faculty member was out of the office. During the second call, a message was left for the faculty member if they were out of the office. A third call was made if the faculty member had not responded to previous calls. At this point, if there was still no response, the faculty member was eliminated from the sample. Calls were staggered between morning and afternoon to help ensure a better response rate.



Results

Of the 79 cases examined, 32 faculty members participated in the research activity and 47 did not. Descriptive data was reported for twelve variables in six categories: academic college, participation in the research project, gender, rank, class enrollments, and types of communication.

The faculty sample represented nine of the ten colleges within the university. The College of Arts and Sciences had the largest representation (n = 32, 40.5%). The College of Basic Sciences and the College of Business Administration also had considerable representation among the sampled faculty (n = 10, 12.7%, and n = 9, 11.4%, respectively). Each of the remaining colleges made up less than 10% of the research sample (Table 1). The College of Library and Information Science did not have a faculty member in the research study; this college does not have an undergraduate program, and its graduate program enrolled 159 students (0.5%) during the semester the research project was conducted.


Table 1

College representation of faculty participants.

College                             Number of Sample    Percentage of Sample
Arts and Sciences                   32                  40.5
Basic Sciences                      10                  12.7
Business Administration             9                   11.4
Library and Information Science     0                   0.0
Music and Dramatic Arts             [not recoverable]
Research and Graduate Studies       [not recoverable]
[Values for the remaining colleges were not recoverable from the source;
each made up less than 10% of the sample.]


The faculty sample included more males (n = 54, 68.4%) than females (n = 25, 31.6%). Data was also collected on the academic rank of sample members. The largest group was instructors (n = 28, 35.9%), followed by professors (n = 25, 32.1%), associate professors (n = 15, 19.2%), and assistant professors (n = 10, 12.8%).

The faculty sample was also described in terms of enrollment data for the course through which each member was selected into the sample. Three enrollment measurements were recorded: 1) the maximum enrollment established for the course by the department, 2) the actual official enrollment in the course, and 3) the number of students present on the day that the research activity was administered. The first two measurements were available for all of the faculty in the sample; the third was available only for the faculty who participated in the classroom based research project. Maximum enrollments ranged from 10 to 375 students with a mean of 52.9 (standard deviation = 64.37). Actual enrollments ranged from one student to 287 students with a mean of 41.1 (standard deviation = 54.12). The number of students present in class for the participating faculty (n = 32) on the day that the research activity was conducted ranged from 2 to 187 with a mean of 39.0 (Table 2).


Table 2

Class enrollment data.

Measure                                    Range        Mean (SD)
Maximum Course Enrollment (n = 79)         10 to 375    52.9 (64.37)
Actual Course Enrollment (n = 79)          1 to 287     41.1 (54.12)
Number of Student Participants (n = 32)    2 to 187     39.0


The last group of variables examined was types of communication (Table 3). The number of e-mails sent by the researchers to sample members ranged from 1 to 10 with a mean of 3.63. The number of e-mails received from faculty members ranged from 0 to 4 with a mean of 1.20. Phone calls made by the researchers ranged from 0 to 5 with a mean of 1.72. Phone calls received from faculty members ranged from 0 to 1 with a mean of 0.06. There were 8 personal contacts (10.1%) between the researchers and sample faculty members.


Table 3

Types of communication used to contact sample participants.

Type of Communication                              Range      Mean
E-mails sent by researchers to sample members      1 to 10    3.63
E-mails received from sample members               0 to 4     1.20
Phone calls made by researchers to sample members  0 to 5     1.72
Phone calls received from sample members           0 to 1     0.06


Discriminant Analysis

The last objective of this study was to determine whether a model existed that significantly increased the researchers' ability to correctly classify faculty members on whether or not they were willing to participate in a classroom research study. Discriminant analysis was selected as the statistical technique because the dependent variable, whether or not a faculty member participated in the research study, is dichotomous (Klecka, 1980).
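For readers unfamiliar with the technique, the two-group case can be illustrated with a minimal pure-Python sketch using a single predictor. The data below is invented for illustration only; the study's actual comprehensive model compared 11 variables.

```python
def fit_univariate_discriminant(x_yes, x_no):
    """Two-group discriminant rule with a single predictor.

    With one predictor, equal priors, and equal variances, the linear
    discriminant rule reduces to a cutoff at the midpoint of the two
    group means. This simplified form is for illustration only.
    """
    mean_yes = sum(x_yes) / len(x_yes)
    mean_no = sum(x_no) / len(x_no)
    cutoff = (mean_yes + mean_no) / 2
    yes_below = mean_yes < mean_no       # direction of the rule

    def classify(x):
        return "participate" if (x < cutoff) == yes_below else "decline"

    return classify

# Invented data: participants tended to receive fewer phone calls.
calls_participants = [0, 1, 1, 2, 0, 1]
calls_decliners = [2, 3, 4, 3, 5, 2]
model = fit_univariate_discriminant(calls_participants, calls_decliners)
hits = sum(model(x) == "participate" for x in calls_participants) + \
       sum(model(x) == "decline" for x in calls_decliners)
print(hits, "of 12 cases classified correctly")  # 11 of 12 here
```

The full multivariate procedure weights several such predictors into a single discriminant function; the logic of classifying each case to its nearer group is the same.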

As the first step in examining the comprehensive model, the F-to-enter statistic was used to compare the two groups (Participated in Research Study and Did Not Participate in Research Study). Comparisons were made on 11 variables, and the groups were found to be statistically different on 7 of the variables. In order of the magnitude of their contribution to the significant model, these variables were:

1. Number of telephone calls made to the faculty member. Those receiving more calls tended not to participate.

2. The official enrollment of the course. Individuals teaching courses with larger enrollments tended to participate.

3. Whether or not the individual was a faculty member in the College of Education. College of Education faculty tended not to participate.

4. Whether or not the individual was at the academic rank of instructor. Instructors tended to participate.

5. Whether or not the researcher contacted the individual in person. Those contacted in person tended to participate.

6. Whether or not the individual was a faculty member in the College of Agriculture. College of Agriculture faculty tended to participate.

7. Whether or not the individual was a faculty member in the College of Engineering. College of Engineering faculty tended not to participate.

After comparison of the discriminating variable means was completed, we examined the independent variables included in the analysis for the presence of multicollinearity. No problems were identified.

During the last step of the discriminant analysis process, the percentage of correctly classified cases was examined. To be meaningful, the model needed to correctly classify at least 62.5% of cases (a 25% improvement over the 50% chance rate). The comprehensive model in this study correctly classified 83.56% of the sample members.
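The 62.5% benchmark is 1.25 times a 50% chance rate. With the study's unequal group sizes (32 participants vs. 47 non-participants), the proportional and maximum chance criteria discussed in the discriminant analysis literature give somewhat different baselines; the sketch below shows the arithmetic (the function name is ours):

```python
def chance_criteria(n_group1, n_group2):
    """Chance-rate benchmarks for a two-group classification model."""
    n = n_group1 + n_group2
    p1, p2 = n_group1 / n, n_group2 / n
    proportional = p1 ** 2 + p2 ** 2  # expected hit rate of random assignment
    maximum = max(p1, p2)             # assign every case to the larger group
    return proportional, maximum

prop, maxc = chance_criteria(32, 47)          # groups from this study
print(round(prop, 3))                         # 0.518 proportional chance
print(round(maxc, 3))                         # 0.595 maximum chance
print(0.5 * 1.25)                             # 0.625 threshold used in the text
print(round((0.8356 - 0.5) / 0.5 * 100, 1))   # 67.1% improvement over 50%
```

By any of these baselines the observed hit rate of 83.56% is a substantial improvement over chance.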




Discussion

The majority of faculty at a research extensive university who were asked to participate in a classroom based research activity chose not to participate. Demographic differences between faculty who participated and those who did not were primarily in the academic college in which they were employed, which logically would relate to their area of preparation and expertise.

A model was found that increased the researchers’ ability to accurately predict whether or not faculty would participate in classroom based research activities. Significant explanatory factors were those related to specific contact techniques employed by the researchers, academic colleges in which the faculty were employed, and enrollment in the course for which participation was sought.

Conducting the research activity in the classroom greatly facilitated our ability to collect adequate data from students in a timely manner for program assessment. During this research process the researchers collected useable data from 1,200 students who participated in the classroom research activity designed to evaluate the effectiveness of financial aid counseling techniques. Respondent demographic data was collected for the following variables: 1) education level (Freshman, Sophomore, Junior, Senior, Master's, and Doctorate), 2) ethnic background, and 3) gender. Aggregate demographic data for these variables was compared to university aggregate data on student enrollment for the same semester. The respondent sample closely matched the university enrollment for the semester (Tables 4, 5, and 6).


Table 4

Educational level of student participants.

Educational Level        University Percentage    Sample Percentage
[Percentage values for the Freshman, Sophomore, Junior, Senior, Master's,
and Doctorate levels were not recoverable from the source.]


Table 5

Ethnic background of student participants.

Ethnic Background           University Percentage    Sample Percentage
Asian/Pacific Islander      [not recoverable]
Black (non-Hispanic)        [not recoverable]
American Indian             [not recoverable]
[Values and rows for the remaining ethnic categories were not recoverable
from the source.]

*Three participants did not respond to this question.


Table 6

Gender of student participants.

Gender        University Percentage    Sample Percentage
[Percentage values were not recoverable from the source.]

In addition to the advantage of an adequate sample size, the data was collected within a six-month time frame, which allowed the researchers to evaluate current student financial aid counseling practices. The data could also be used to identify the focus of future counseling sessions so that students' needs could be addressed more adequately.

The small number of faculty members who chose to participate was a limitation of this study. While the research results suggest possibilities, the sample size was not sufficient for the findings to be generalized.

The research results suggest that depending upon the type of accountability data being sought for program assessment, college counseling researchers might facilitate data collection by: 1) targeting students in large general classes taught by instructors, 2) using e-mail to distribute general information, and 3) when human resources and time allow, making personal contacts to secure faculty participation in research projects. Depending upon the type of data being collected, research might further be facilitated by focusing on faculty members in colleges which have a long history of using the scientific method for conducting research.


Additional research is needed to identify specific reasons that faculty who chose not to participate made that decision. Follow-up interviews with faculty from colleges with substantially lower participation rates could be conducted to determine if systematic reasons exist for this situation.

This study also needs to be replicated at other research extensive universities to verify results. Researchers at other types of higher education institutions interested in these research techniques would need to conduct additional research studies to determine which factors are relevant for their type of institution.



References

Astin, A. W. (1993). Assessment for excellence. Phoenix, AZ: The Oryx Press.

Cozby, P. C. (2001). Methods in behavioral research (7th ed.). Boston: McGraw-Hill Companies, Inc.

Corrallo, S. (1991). Critical concerns in assessing selected higher-order thinking and communication skills of college graduates. Assessment Update, 3(6), 5-6.

Family Educational Rights and Privacy Act of 1974, 20 U.S.C. 1232(g) (1974).

Hayes, L. L. (2002). The death of college counseling? Counseling Today, 45(3), 12-13.

Klecka, W. R. (1980). Discriminant analysis. Beverly Hills, CA: Sage Publications.

Komives, S. R., Woodard, D. B., Jr., & Associates. (1996). Student services: A handbook for the profession (3rd ed.). San Francisco: Jossey-Bass Inc., Publishers.

Schmidt, J. J. (2003). Counseling in schools: Essential services and comprehensive programs (4th ed.). Boston: Allyn and Bacon.

U.S. Department of Education. (1997). Common manual: Unified student loan policy. Washington, D.C.

Whiston, S. C. (2000). Principles and applications of assessment in counseling. Belmont, CA: Wadsworth/Thomson Learning.