College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers refer to a pdf generated from the print dialog]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Citing scholars like Peggy O’Neill (2) and Richard Haswell (3), they posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in the social sciences to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).
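The authors do not reproduce their sampling calculator’s formula, but their figure of 290 is consistent with the standard Cochran sample-size calculation with a finite-population correction at 95% confidence and a 5% margin of error. A minimal sketch under those assumed parameters:

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, proportion=0.5):
    """Cochran's sample-size formula with finite-population correction.

    confidence_z=1.96 corresponds to 95% confidence; proportion=0.5 is
    the most conservative (largest-sample) assumption. These parameter
    choices are assumptions, not values stated by the authors.
    """
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin ** 2
    # Adjust for the finite pool of 1,174 submitted essays
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(1174))  # -> 290, matching the authors' target sample
```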

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding,” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the five team members’ scores were then averaged to produce the final score for each paper (9).
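The Research team’s procedure reduces to two steps: pull three slices from each essay, then average five raters’ scores. A schematic sketch of that logic (selecting the middle paragraph by index is an assumption; the authors describe drawing it from the essay’s middle page or pages):

```python
def thin_slice(paragraphs):
    """Return beginning, middle, and end slices of an essay.

    The middle slice here is simply the middle paragraph by index;
    the study chose it from the essay's middle page(s).
    """
    return [paragraphs[0], paragraphs[len(paragraphs) // 2], paragraphs[-1]]

def final_score(rater_scores):
    """Average the individual raters' scores into one final score."""
    return sum(rater_scores) / len(rater_scores)

essay = ["intro", "p2", "p3", "p4", "conclusion"]
print(thin_slice(essay))             # ['intro', 'p3', 'conclusion']
print(final_score([3, 4, 3, 4, 4]))  # 3.6
```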

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring compared with the Regular group, while individual average scoring times for Research team members were less than half those of the Regular members (13).

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Carter and Gallegos. Assessing Celebrations of Student Writing. CS, Spring 2017. Posted 09/03/2017.

Carter, Genesea M., and Erin Penner Gallegos. “Moving Beyond the Hype: What Does the Celebration of Student Writing Do for Students?” Composition Studies 45.1 (2017): 74-98. Web. 29 Aug. 2017.

Genesea M. Carter and Erin Penner Gallegos present research on “celebrations of student writing (CSWs)” (74), arguing that while extant accounts of these events portray them as positive and effective additions to writing programs, very little research has addressed students’ own sense of the value of the CSW experience. To fill this gap, Carter and Gallegos interviewed 23 students during a CSW at the University of New Mexico (UNM) and gathered data from an anonymous online survey (84).

As defined by Carter and Gallegos, a CSW asks students to represent the writing from their coursework in a public forum through posters and art installations (77). Noting that the nature of a CSW is contingent on the particular institution at which it takes place (75, 91), the authors provide specific demographic data about UNM, where their research was conducted. The university is both a “federally designated Hispanic Serving Institution (HSI)” and “a Carnegie-designated very high research university” (75), thus incorporating research-level expectations with a population of “historically marginalized,” “financially very needy” students with “lower educational attainment” (76). Carter and Gallegos report on UNM’s relatively low graduation rates as compared to similar universities and the “particular challenges” faced by this academic community (76).

Among these challenges, in the authors’ view, was a “negative framing of the student population from the university community and city residents” (76). A 2009 meeting with Linda Adler-Kassner exposed graduate students Carter and Gallegos to the CSW model in place at Eastern Michigan University and led them to develop a similar program at UNM (76-77). Carter and Gallegos were intrigued by the promise of programs like the one at EMU to present a new, positive narrative about students and their abilities to the local academic and civic communities.

They recount the history of the UNM CSW as a project primarily initiated by graduate students that continues to derive from graduate-student interests and participation while also being broadly adopted by the larger university and in fact the larger community (78, 92). In their view, the CSW differs from other institutional showcases of student writing such as an undergraduate research day and a volume of essays selected by judges in that it offers a venue for “students who lack confidence in their abilities or who do not already feel that they belong to the university community” (78). They argue that changing the narrative about student writing requires a space for recognizing the strengths of such historically undervalued students.

Examining CSWs from a range of institutions in order to discover what the organizers believe these events achieve, the authors found “a few commonalities” (79). Organizers underscored their belief that the audience engagement offered by a CSW reinforced the nature of writing as “social, situational, and public,” a “transactional” experience rather than the “one-dimensional” model common in academic settings (80). Further, CSWs are seen to endorse student contributions to research across the university community and to inspire recognition of the multiple literacies that students bring to their academic careers (81). The authors’ review also reveals organizers’ beliefs that such events will broaden students’ understanding of the writing process by foregrounding how writing evolves through revision into different modes (81).

An important thread is the power of CSWs to enhance students’ “sense of belonging, both to an intellectual and a campus community” (82). Awareness that their voices are valued, according to the authors’ research, is an important factor in student persistence among marginalized populations (81). Organizers see CSWs as encouraging students to see themselves as “authors within a larger community discourse” (83).

Carter and Gallegos note a critique by Mark Mullen, who argues that CSWs can actually exploit student voices in that they may be a “celebration of the teaching of writing, a reassertion of agency by practitioners who are routinely denigrated” (qtd. in Carter and Gallegos 84). The authors find from their literature review that, indeed, few promotions of CSWs in the literature include student voices (84). They contend that their examination of student perceptions of the CSW process can further understanding of the degree to which these events meet their intended outcomes (84).

Their findings support the expectation that students will find the CSW valuable, but reveal several ways in which the hopes of supporters and the responses of students are “misaligned” (90). While the CSW did contribute to students’ sense of writing as a social process, students expressed most satisfaction in being able to interact with their peers, sharing knowledge and experiencing writing in a new venue as fun (86). Few students understood how the CSW connected to the goals of their writing coursework, such as providing a deeper understanding of rhetorical situation and audience (87). While students appreciated the chance to “express” their views, the authors write that students “did not seem to relate expression to being heard or valued by the academic community” or to “an extension of agency” (88).

For the CSW to more clearly meet its potential, the authors recommend that planners at all levels focus on building metacognitive awareness of the pedagogical value of such events through classroom activities (89). Writing programs involved in CSWs, according to the authors, can develop specific outcomes beyond those for the class as a whole that define what supporters and participants hope the event will achieve (89-90). Students themselves should be involved in planning the event as well as determining its value (90), with the goal of “emphasizing to their student participants that the CSW is not just another fun activity but an opportunity to share their literacies and voices with their classmates and community” (90).

A more detailed history of the development of the UNM event illustrates how the CSW became increasingly incorporated into other university programs and how it ultimately drew participation from local artists and performers (92-93). The authors applaud this “institutionalizing” of the event because such broad interest and sponsorship mean that the CSW can continue to grow and spread knowledge of student voices to other disciplines and across the community (93).

They see “downsides” in this expansion in that the influence of different sponsors from year to year and attachment to initiatives outside of writing tend to separate the CSW from the writing courses it originated to serve. Writing programs in venues like UNM may find it harder to develop appropriate outcomes, assess results, and ensure that the CSW remains a meaningful part of a writing program’s mission (93). The authors recommend that programs hoping that a CSW will enhance actual writing instruction should commit adequate resources and attention to the ongoing events. The authors write that, “imperatively,” student input must be part of the process in order to prevent such events from “becom[ing] merely another vehicle for asserting the value of the teaching of writing” (94; emphasis original).