History of the Surveys of Assessment Culture

Overview and History of the Surveys of Assessment Culture

Over the past two decades, the term “culture of assessment” has been used with increasing frequency and prominence. Some of the earliest scholarly references to the idea that a higher education institution carries a set of beliefs that support or hinder the use of evidence across its many decision-making settings (the definition used in this line of research) include Harvey and Knight (1996), Hutchings (1990), and Tierney and Rhoads (1995). In broader contexts, talk of a “culture of…” has come to pervade many facets of society, including K-12 education, health care, the military, government, and business. Yet for all the attention it commands in higher education scholarly discourse, the notion of a culture of assessment has remained largely conjectural and understudied (Baas, Rhoads, & Thomas, 2016) and detached from the larger scholarly discourses of educational leadership and organizational development (Haviland, 2014). In particular, sustained empirical studies are needed to advance research in this area (Fuller, Skidmore, Bustamante, & Holzweiss, 2016; Kuh & Ikenberry, 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Ndoye, 2008).

To fill this gap, Dr. Matthew Fuller began collecting data on the concept of a culture of assessment as early as 2005. The initial surveys were “one-off” instruments whose questions changed from administration to administration as discussions and conference proceedings piqued Dr. Fuller’s interest. By 2007, however, a set of regularly used (though not expert-reviewed or validated) questions had emerged. These early surveys relied on convenience samples, professional listservs, and contacts within Dr. Fuller’s professional network.

In 2011, Dr. Fuller accepted a tenure-track position at Sam Houston State University, and the Surveys began a period of tremendous development with contributions from Drs. Susan Skidmore, Rebecca Bustamante, and Peggy Holzweiss. The instruments were also reviewed numerous times by the newly formed Council of Scholars. Formed in 2013 as an advisory panel for the research, the Council’s members offer advice and provide expert reviews of the instruments, data, and publications. The overall purpose of the Surveys has always been to provide institutions of higher education with useful, meaningful data on the factors that influence how they use data in a variety of common decision-making settings. The Surveys spur cross-organizational dialogue about how assessment is perceived and practiced and what factors influence assessment at an institution.

With this purpose in mind, and to ground the research in this definition of a culture of assessment, a scholarship-based conceptual framework was developed across 2010 and 2011. Maki’s (2010) Principles of an Inclusive Commitment to Assessment were chosen as the guiding theory for the survey because they examine factors influencing an institution-wide commitment to assessment and rely on anchors, or symbols and structures, that shape cultures of assessment. A copy of the conceptual framework is available online.

Starting in 2011, an intensive year-long literature review was conducted by Dr. Fuller, Dr. Skidmore, Dr. Bustamante, and Dr. Holzweiss with the purpose of refining the instruments. The themes resulting from this review served as the framework for the instrument refinement process. First, three cultures were expressly mentioned in the literature pertaining to assessment: a) a culture of assessment (focused on improving student learning), b) a culture of fear, and c) a culture of compliance. Cultures of inquiry and evidence were not widely recognized in 2011 scholarship on higher education assessment. Moreover, six themes were found in the 2011 literature on cultures of assessment: a) Leadership, which the team considered a higher-order factor cutting across all other constructs, b) Faculty Perceptions, c) Use of Data, d) Sharing, e) Compliance or Fear Motivators, and f) Normative Purposes for Assessment. Considerable discussion was given to a theme concerning the connection of an institutional culture of assessment to institutional cultures of learning and teaching. However, this theme and the Leadership theme were considered higher-order structures and, as such, were integrated throughout the instrument.

Throughout late 2011 and 2012, an interdisciplinary team of faculty developed and/or refined question stems for the Administrators and Faculty Surveys, and instruments closely resembling the current surveys emerged. The surveys are designed around three sections: a) introductory statements (i.e., the statement of informed consent and definitions), b) assessment culture scales (consisting of the themes from the literature review), and c) concluding questions (i.e., demographic questions).

In 2012, 19 noted assessment scholars were invited to review both the Administrators and Faculty Surveys. Improvements, such as rewording and the addition or deletion of questions, were made following their advice. These experts were also critical in determining what kinds of information could be shared with institutions. In the same year, the first Information Sharing Agreement and contact file templates were reviewed and approved by SHSU’s Office of General Counsel. This support, along with the development of a basic website (www.shsu.edu/assessmentculture), considerably improved survey administration.

A small, single-institution pilot study of the Administrators Survey of Assessment Culture was conducted in late 2012 and early 2013, primarily to ensure the survey system was working efficiently. The first nation-wide pilot of the Administrators Survey of Assessment Culture was conducted in early 2013. Nineteen institutions participated, and institutional research and assessment directors were asked to provide feedback on the survey at two points: a) after they completed the survey, and b) after they received data. Improvements were made following their feedback. At this point, the instruments currently in use had emerged and copyrights were retained. In conjunction with the pilot of the Administrators Survey, the Faculty Survey of Assessment Culture was piloted in the same fashion and with the same participating institutions. After participation, an interdisciplinary team of faculty members (usually assessment committee members) provided feedback, and improvements were made.

Validation and Reliability Confirmation Efforts

The 2012-2013 pilot studies allowed exploratory and confirmatory factor analyses to serve as validation efforts. In general, the instruments measured the expected factors, though the connection to teaching and learning was not indicated as a higher-order factor; Leadership was indeed a higher-order factor. Reliability coefficients for each construct measured have been strong and never below Nunnally’s (1978) accepted threshold of α ≥ 0.7. These validation efforts for the Administrators and Faculty Surveys allowed improvements to be made, specifically in the ordering of questions and the reduction of instrument length (though more consideration of this issue is needed). Published articles can be found online on our Research Page.
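For readers unfamiliar with how such reliability coefficients are calculated, the minimal sketch below computes Cronbach’s alpha for a hypothetical set of Likert-scale items. The function name and response matrix are illustrative assumptions only and are not drawn from the Surveys themselves.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: six respondents answering a four-item Likert scale (1-5).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # values >= 0.70 meet Nunnally's (1978) threshold
```

Values at or above 0.70 meet the threshold referenced above; the Surveys’ reported coefficients, of course, come from actual administration data rather than a toy matrix like this one.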

The first nation-wide, tandem administration of the Administrators and Faculty Surveys of Assessment Culture occurred in the summer and fall of 2013. Since then, the Administrators Survey and the Faculty Survey have been administered in tandem each spring and fall, respectively.

A second round of factor analyses in 2014 allowed additional consideration of items that have consistently been retained within constructs as items of importance. These analyses would allow the instruments to be considerably shortened, though this reduction has not yet occurred, as faculty have also been pondering the addition of a few new questions related to teaching and learning.

Beginning in 2015, data from the Surveys began contributing to publications on the culture of assessment in higher education. Since their inception, over 1,100 institutions of higher education and nearly 6,000 individuals have participated in the Administrators and Faculty Surveys of Assessment Culture.

Student Affairs Survey of Assessment Culture

In 2013, a group of student affairs practitioners approached Dr. Fuller about using the Surveys of Assessment Culture in student affairs contexts. Members of the Council of Scholars agreed that the existing Administrators and Faculty Surveys sufficiently represented the constructs noted in student affairs assessment as well, but that slight language modifications would be needed to fit the unique contexts of student affairs. The team reasoned that measuring similar constructs in student affairs would allow for useful comparisons across organizational units and for discussions about how student affairs relates to the broader institutional context. Therefore, the team augmented the existing instruments to develop an instrument for use in student affairs.

In summer 2014, a pilot involving nine institutions was conducted. Student affairs practitioners at the mid-manager level or higher were invited to participate in the study. The student affairs assessment contacts at these institutions were asked to seek feedback from their colleagues and contribute to a third round of reflections. Data from this pilot study were used in a confirmatory factor analysis project wherein the factor structure of the instrument was adequately confirmed. However, given the small sample size, additional confirmation is needed. Results from this study have been accepted for publication and will be available shortly. During the spring 2016 semester, the first nation-wide administration of the Student Affairs Survey was conducted. As of September 2017, more than 140 institutions have participated in the survey.

 

 

Recent Developments

The surveys have enjoyed strong responses, and publication and presentation of results have been positively received. Many questions and comments are received each week via the Surveys’ email address, assessmentculture@shsu.edu. Many publishers have requested publications using survey data. In 2015, researchers from five countries outside the U.S. (Australia/New Guinea, Japan, China, Qatar, and the U.K.) asked whether the instrument could be adapted to their countries’ needs. Nine institutions (three in the HLC region, one in WASC, and five in SACS) have used the surveys in accreditation efforts. One institution has used the Faculty Survey of Assessment Culture to secure $1.6 million in funding from the Institute of Education Sciences. Future developments are posted regularly at www.shsu.edu/assessmentculture.

The following population samples and timelines are in place for each survey. Additional information about the unit of analysis and response rate ranges is also included. Links to the surveys are password-protected files; the password can be obtained by emailing assessmentculture@shsu.edu.

Administrators Survey of Assessment Culture (link to survey)
Population sample: A nation-wide sample of institutional research or assessment directors, stratified by geographic region, institutional profit status, and degree type offered; usually about 555 directors are surveyed (a sketch of this kind of stratified draw appears after this list).
Timeline: Spring semester (February-April); Summer semester (May-July); Fall semester (August-October)
Unit of analysis: Institution; the director serves as the representative for the institution’s culture of assessment
Response rate range: 29% to 66%

Faculty Survey of Assessment Culture (link to survey)
Population sample: Administered in tandem with the Administrators Survey; administrators elect to survey all faculty teaching a credit-generating course in the semester of survey administration
Timeline: Spring semester (February-April); Summer semester (May-July); Fall semester (August-October)
Unit of analysis: Institution
Response rate range: 6% to 51%

Student Affairs Survey of Assessment Culture (link to survey)
Population sample: Institutions select whom they consider “mid-manager or higher” student affairs leaders in their division
Timeline: Spring semester (February-April); Summer semester (May-July); Fall semester (August-October)
Unit of analysis: Division of Student Affairs
Response rate range: 36% to 39% across two pilot administrations
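As an illustration of the stratified draw described for the Administrators Survey above, the sketch below samples a hypothetical frame of institutions proportionally within each combination of region, profit status, and degree type. The data frame, column names, and sampling fraction are assumptions made for illustration; this is not the Surveys’ actual sampling code.

```python
import itertools
import pandas as pd

# Hypothetical sampling frame: two institutions in every combination of
# geographic region, profit status, and highest degree offered.
strata = list(itertools.product(
    ["Northeast", "South", "Midwest", "West"],   # geographic region
    ["Public", "Private nonprofit"],             # profit status
    ["Associate", "Doctoral"],                   # degree type offered
))
frame = pd.DataFrame(
    [(f"Institution {i}", *combo) for i, combo in enumerate(strata * 2, start=1)],
    columns=["institution", "region", "profit_status", "degree_type"],
)

# Proportional stratified sample: draw the same fraction from every stratum so the
# sample mirrors the frame's composition on all three stratification variables.
sample = frame.groupby(
    ["region", "profit_status", "degree_type"], group_keys=False
).sample(frac=0.5, random_state=42)

print(sample.sort_values("institution"))
```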

 

Works Cited

Baas, L., Rhoads, J. C., & Thomas, D. (2016). Are quests for a “culture of assessment” mired in a “culture war” over assessment? A Q-methodological inquiry. Sage Open, 1-17. doi:10.1177/2158244015623591

Fuller, M., Skidmore, S., Bustamante, R., & Holzweiss, P. (2016). Empirically exploring higher education cultures of assessment. The Review of Higher Education, 39(3), 395-429. doi:10.1353/rhe.2016.0022

Harvey, L., & Knight, P. (1996). Transforming higher education. Ballmoor, UK: Open University Press.

Haviland, D. (2014). Beyond compliance: How organizational theory can help leaders unleash the potential of assessment. Community College Journal of Research and Practice, 38(9), 755-765. doi:10.1080/10668926.2012.711144

Hutchings, P. (1990, June). Assessment and the way we work. Assessment Forum, pp. 12-14.

Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in US colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana-Champaign, IL: National Institute for Learning Outcomes Assessment.

Ndoye, A. (2008, February 26). Culture of assessment survey results. Wilmington, NC: University of North Carolina Wilmington Press.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.

Tierney, W. G., & Rhoads, R. A. (1995). The culture of assessment. In J. Smyth (Ed.), Academic work: The changing labour process in higher education (pp. 99-111). Buckingham, UK: Open University Press.