College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Ray et al. Rethinking Student Evaluations of Teaching. Comp Studies Spring 2018. Posted 08/25/2018.

Ray, Brian, Jacob Babb, and Courtney Adams Wooten. “Rethinking SETs: Retuning Student Evaluations of Teaching for Student Agency.” Composition Studies 46.1 (2018): 34-56. Web. 10 Aug. 2018.

Brian Ray, Jacob Babb, and Courtney Adams Wooten report a study of Student Evaluations of Teaching (SETs) across a range of institutions. The researchers collected 55 different forms, 45 of them institutions’ generic forms and 10 designed specifically for writing classes. They coded 1,108 questions from these forms in order to determine what kinds of questions were being asked (35).

The authors write that although SETs and their use, especially in personnel decisions, are of concern in rhetoric and composition, very little scholarship in the field has addressed the issue (34-35). They summarize a history of student evaluations as tools for assessing teachers, beginning with materials from the 1920s. Early SETs focused heavily on features of personality such as “wit,” “tact,” and “popularity” (38), as well as physical appearance (39). This focus on “subjective” characteristics meant that students were asked to judge “factors that neither they nor the instructor had sole control over and that they could do little to affect” (38).

This emphasis persisted throughout the twentieth century. Herbert Marsh conducted “numerous studies” in the 1970s and 1980s and eventually created the Students’ Evaluations of Educational Quality (SEEQ) form in 1987 (35). This instrument asked students about nine features:

[L]earning, enthusiasm, organization and clarity, group interaction, individual rapport, breadth of coverage, tests and grading, assignments, and difficulty (39)

The authors contend that these nine factors substantively guide the SETs they studied (35), and they claim that, in fact, in important ways, “current SET forms differ little from those seen in the 1920s” (40).

Some of composition’s “only published conversations about SETs” revolved around workshops conducted by the Conference on College Composition and Communication (CCCC) from 1956 through 1962 (39). The authors report that instructors participating in these discussions saw the forms as most appropriate for “formative” purposes; very few institutions used them in personnel matters (39).

Data from studies of SETs in other fields reveal some of the problems that can result from common versions of these measures (37). The authors state that studies over the last ten years have not been able to link high teacher ratings on SETs with improved student learning or performance (40). Studies point out that many of the most common categories, like “clarity and fairness,” remain subjective, and that students consistently rank instructors on personality rather than on more valid measures of effectiveness (41).

Such research documents bias related to gender and ethnicity: in one study asking students to assess “a hypothetical curriculum vitae according to teaching qualifications and expertise,” female African-American teachers were rated lowest (42). Male instructors are more commonly praised for their “ability to innovate and stimulate critical thought”; women are downgraded for failing to be “compassionate and polite” (42). Studies also show that elements like class size and workload affect results (42). Physical attractiveness continues to influence student opinion, as does the presence of “any kind of reward,” like lenient grading or even supplying candy (43).

The authors emphasize their finding that a large percentage of the questions they examined asked students about either some aspect of the teacher’s behavior (e.g., “approachability,” “open-mindedness” [42]) or what the teacher did (“stimulated my critical thinking” [45]). The teacher was the subject of nearly half of the questions (45). The authors argue that “this pattern of hyper-attention” (44) casts the teacher as “solely responsible” for the success or failure of the course (43). As a result, the authors contend, students receive a distorted picture of agency in a learning situation. In particular, they are discouraged from seeing themselves as having an active role in their own learning (35).

The authors contend that assigning so much agency to a single individual runs counter to “posthumanist” views of how agency operates in complex social and institutional settings (36). In this view, many factors, including not only all participants and their histories and interests but also the environment and even the objects in the space, play a part in what happens in a classroom (36). When SET questions fail to address this complexity, the authors posit, validity suffers: students are asked to pass judgment on subjective and ambiguously defined qualities as well as on factors beyond the control of any participant (40). Students encouraged to focus on instructor agency may also misjudge instructors who opt for modern “de-center[ed]” methods rather than the lecture-based instruction students expect (44).

Ray et al. note that some programs ask students about their own level of interest and willingness to participate in class activities, and they advocate increased use of such questions (45). But they particularly advocate replacing the emphasis on teacher agency with questions that encourage students to assess their own contributions to their learning experience, to examine the class experience as a whole, and to recognize the “relational” aspects of a learning environment (46). For example:

Instead of asking whether instructors stimulated critical thought, it seems more reasonable to ask if students engaged in critical thinking, regardless of who or what facilitated engagement. (46; emphasis original)

Ray et al. conclude that questions that do isolate instructors’ contributions should lean toward behaviors that can be objectively defined and rated, such as punctuality and responding to emails within a set time frame (46).

The authors envision improved SETs, like those of some programs, that are based on a program’s stated outcomes and that ask students about the concepts and abilities they have developed through their coursework (48). They suggest that programs in institutions that use “generic” evaluations for broader analysis or that do not allow individual departments to eliminate the official form should develop their own parallel forms in order to gather the kind of information that enables more effective assessment of classroom activity (48-49).

A major goal, in the authors’ view, should be questions that “encourage students to identify the interconnected aspects of classroom agency through reflection on their own learning” (49).

 



Wooten et al. SETs in Writing Classes. WPA, Fall 2016. Posted 02/11/2016.

Wooten, Courtney Adams, Brian Ray, and Jacob Babb. “WPAs Reading SETs: Toward an Ethical and Effective Use of Teaching Evaluations.” Journal of the Council of Writing Program Administrators 40.1 (2016): 50-66. Print.

Courtney Adams Wooten, Brian Ray, and Jacob Babb report on a survey examining the use of Student Evaluations of Teaching (SETs) by writing program administrators (WPAs).

According to Wooten et al., although WPAs appear to be dissatisfied with the way SETs are generally used and have often attempted to modify the form and implementation of these tools for evaluating teaching, they have done so without the benefit of a robust professional conversation on the issue (50). Noting that much of the research they found on the topic came from areas outside of writing studies (63), the authors cite a single collection on using SETs in writing programs, edited by Amy Dayton, which recommends using SETs formatively and as one of several measures for assessing teaching. Beyond this source, they cite “the absence of research on SETs in our discipline” as grounds for the more extensive study they conducted (51).

The authors generated a list of WPA contact information at more than 270 institutions, ranging from two-year colleges to private and parochial schools to flagship public universities, and solicited participation via listservs and emails to WPAs (51). Sixty-two institutions responded in summer 2014, a response rate of 23%; 90% of the respondents were four-year institutions.

Despite this low response rate, the authors found the data informative (52). They note that the difficulty in recruiting responses from two-year colleges may have stemmed from the problem of identifying a responsible WPA at schools where no specific individual directed a designated writing program (52).

Their survey, which they provide, asked demographic and logistical questions to establish current practice regarding SETs at the responding institutions as well as questions intended to elicit WPAs’ attitudes toward the ways SETs affected their programs (52). Open-ended questions allowed elaboration on Likert-scale queries (52).

An important recurring theme in the responses involved the kinds of authority WPAs could assert over the type and use of SETs at their schools. Responses indicated that the degree to which WPAs could access student evaluations and use them to make hiring decisions varied greatly. Although 76% of the WPAs could read SETs, a similar number indicated that department chairs and other administrators also examined the student responses (53). In one case, for example, the director of a first-year-experience program took primary charge of the evaluations (53). The authors note that WPAs are held accountable for student outcomes but, in many cases, cannot make the personnel decisions affecting those outcomes (54).

Wooten et al. report other tensions revolving around WPAs’ authority over tenured and tenure-track faculty; in these cases, surveyed WPAs often noted that they could influence neither curricula nor course assignments for such faculty (54). Many WPAs saw their role as “mentoring” rather than “hiring/firing.” Yet the WPAs were obliged to respond to requests from external authorities to deal with poor SETs (54); the authors note a “tacit assumption . . . that the WPA is not capable of interpreting SET data, only carrying out the will of the university” (54). They argue that “struggles over departmental governance and authority” deprive WPAs of the “decision-making power” necessary to do the work required of them (55).

The survey “revealed widespread dissatisfaction” about the ways in which SETs were administered and used (56). Only 13% reported implementing a form specific to writing; more commonly, writing programs used “generic” forms that asked broad questions about the teacher’s apparent preparation, use of materials, and expertise (56). The authors contend that these “indirect” measures do not ask about practices specific to writing and may elicit negative comments from students who do not understand what kinds of activities writing professionals consider most beneficial (56).

Other issues of concern include the use of online evaluations, which provide data that can be easily analyzed but result in lower participation rates (57). Moreover, the authors note, WPAs often distrust numerical data without the context provided by narrative responses, to which they may or may not have access (58).

Respondents also noted confusion or uncertainty about how an institution determines what constitutes a “good” or “poor” score. Many institutions make this determination by comparing an individual teacher’s score to a departmental or university-wide average, with scores below the average signaling the need for intervention. The authors found evidence that even WPAs may fail to recognize that lower scores can be influenced not just by the grade the student expects but also by gender, ethnicity, and age, as well as by whether the course is required (58-59).

Wooten et al. distinguish between “teaching effectiveness,” a basic measure of competence, and “teaching excellence,” practices and outcomes that can serve as benchmarks for other educators (60). They note that at many institutions, SETs seem to have little influence over recognition of excellence, for example through awards or commendations; classroom observations and teaching portfolios appear to be used more often for these determinations. SETs, in contrast, serve a more “punitive” function (61), used more often to single out teachers who purportedly fall short in effectiveness (60).

The authors note the vulnerability of contingent and non-tenure-track faculty to poorly implemented SETs and argue that a climate of fear occasioned by such practices can lead to “lenient grading and lowered demands” (61). They urge WPAs to consider the ethical implications of the use of SETs in their institutions.

Recommendations include “ensuring high response rates” through procedures and incentives; clarifying and standardizing designations of good and poor performance and ensuring transparency in the procedures for addressing low scores; and developing forms specific to local conditions and programs (61-62). Several of the recommendations concern increasing WPA authority over hiring and mentoring teachers, including tenure-track and tenured faculty. Wooten et al. recommend that all teachers assigned to writing courses administer writing-specific evaluations and be required to act on the information these forms provide; the annual-report process can allow tenured faculty to demonstrate their responsiveness (62).

The authors hope that these recommendations will lead to a “disciplinary discussion” among WPAs that will guide “the creation of locally appropriate evaluation forms that balance the needs of all stakeholders—students, teachers, and administrators” (63).