College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. Pruchnic et al. cite scholars like Peggy O’Neill (2) and Richard Haswell (3) to posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in social studies to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).
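The article does not name the “sampling calculator” the authors used, but their 290-paper figure is consistent with Cochran’s sample-size formula with a finite-population correction, assuming a 95% confidence level, a ±5% margin of error, and maximum variability. A minimal sketch under those assumptions:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with finite-population correction.

    Assumes maximum variability (p = 0.5); z = 1.96 corresponds
    to a 95% confidence level.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(1174))  # → 290, matching the study's sample
```

The confidence level and margin of error are assumptions on my part; they are simply the conventional defaults that reproduce the reported number.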

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding,” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members’ scores constituted the final score for each paper (9).

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” scoring compared with the Regular group, while the average individual scoring time for Research team members was less than half that of the Regular members (13).
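As described above, a thin-slice essay score is the mean of five raters’ individual slice scores, and the two teams’ final scores are then correlated. The arithmetic can be sketched as follows, using entirely hypothetical scores (not the study’s data) on a 1–4 Poor-to-Good scale:

```python
from statistics import mean

def thin_slice_score(rater_scores):
    """Final thin-slice score for one essay: the mean of the five
    Research-team raters' individual scores for its sampled slices."""
    return mean(rater_scores)

def pearson(xs, ys):
    """Pearson correlation between two lists of paired essay scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for six essays (1 = Poor ... 4 = Good):
research = [thin_slice_score(s) for s in (
    [3, 3, 4, 3, 3], [2, 2, 1, 2, 2], [4, 4, 3, 4, 4],
    [1, 2, 1, 1, 1], [3, 2, 3, 3, 2], [2, 3, 2, 2, 3],
)]
regular = [3, 2, 4, 1, 2, 3]  # one whole-essay score per paper

r = pearson(research, regular)  # positive correlation for these invented scores
```

The invented scores here correlate highly; the study itself reports only a “low positive” (though statistically significant) correlation between the two teams.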

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).



Litterio, Lisa M. Contract Grading: A Case Study. J of Writing Assessment, 2016. Posted 04/20/2017.

Litterio, Lisa M. “Contract Grading in a Technical Writing Classroom: A Case Study.” Journal of Writing Assessment 9.2 (2016). Web. 05 Apr. 2017.

In an online issue of the Journal of Writing Assessment, Lisa M. Litterio, who characterizes herself as “a new instructor of technical writing,” discusses her experience implementing a contract grading system in a technical writing class at a state university in the northeast. Her “exploratory study” was intended to examine student attitudes toward the contract-grading process, with a particular focus on how the method affected their understanding of “quality” in technical documents.

Litterio’s research into contract grading suggests that it can have the effect of supporting a process approach to writing as students consider the elements that contribute to an “excellent” response to an assignment. Moreover, Litterio contends, because it creates a more democratic classroom environment and empowers students to take charge of their writing, contract grading also supports critical pedagogy in the Freirean model. Litterio draws on research to support the additional claim that contract grading “mimic[s] professional practices” in that “negotiating and renegotiating a document” as students do in contracting for grades is a practice that “extends beyond the classroom into a workplace environment.”

Much of the research she reports dates to the 1970s and 1980s, often reflecting work in speech communication, but she cites as well models from Ira Shor, Jane Danielewicz and Peter Elbow, and Asao Inoue from the 2000s. In a common model, students can negotiate the quantity of work that must be done to earn a particular grade, but the instructor retains the right to assess quality and to assign the final grade. Litterio depicts her own implementation as a departure from some of these models in that she did make the final assessment, but applied criteria devised collaboratively by the students; moreover, her study differs from earlier reports of contract grading in that it focuses on the students’ attitudes toward the process.

Her Fall 2014 course, which she characterizes as a service course, enrolled twenty juniors and seniors representing seven majors. Neither Litterio nor any of the students were familiar with contract grading, and no students withdrew on learning from the syllabus and class announcements of Litterio’s grading intentions. At mid-semester and again at the end of the course, Litterio administered an anonymous open-ended survey to document student responses. Adopting the role of “teacher-researcher,” Litterio hoped to learn whether involvement in the generation of criteria led students to a deeper awareness of the rhetorical nature of their projects, as well as to “more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.”

Litterio shares the contract options, which allowed students to agree to produce a stated number of assignments of either “excellent,” “great,” or “good” quality, an “entirely positive grading schema” that draws on Frances Zak’s claim that positive evaluations improved student “authority over their writing.”

The criteria for each assignment were developed in class discussion through an open voting process that resulted in general, if not absolute, agreement. Litterio provides the class-generated criteria for a resumé, which included length, format, and the expectations of “specific and strong verbs.” As the instructor, Litterio ultimately decided whether these criteria were met.

Mid-semester surveys indicated that students were evenly split in their preferences for traditional grading models versus the contract-grading model being applied. At the end of the semester, 15 of the 20 students expressed a preference for traditional grading.

Litterio coded the survey responses and discovered specific areas of resistance. First, some students cited the unfamiliarity of the contract model, which made it harder for them to “track [their] own grades,” in one student’s words. Second, the students noted that the instructor’s role in applying the criteria did not differ appreciably from instructors’ traditional role as it retained the “bias and subjectivity” the students associated with a single person’s definition of terms like “strong language.” Students wrote that “[i]t doesn’t really make a difference in the end grade anyway, so it doesn’t push people to work harder,” and “it appears more like traditional grading where [the teacher] decide[s], not us.”

In addition, students resisted seeing themselves and their peers as qualified to generate valid criteria and to offer feedback on developing drafts. Students wrote of the desire for “more input from you vs. the class,” their sense that student-generated criteria were merely “cosmetics,” and their discomfort with “autonomy.” Litterio attributes this resistance to students’ actual novice status as well as to the nature of the course, which required students to write for different discourse communities because of their differing majors. She suggests that contract grading may be more appropriate for writing courses within majors, in which students may be more familiar with the specific nature of writing in a particular discipline.

However, students did confirm that the process of generating criteria made them more aware of the elements involved in producing exemplary documents in the different genres. Incorporating student input into the assessment process, Litterio believes, allows instructors to be more reflective about the nature of assessment in general, including the risk of creating a “yes or no . . . dichotomy that did not allow for the discussions and subjectivity” involved in applying a criterion. Engaging students throughout the assessment process, she contends, provides them with more agency and more opportunity to understand how assessment works. Student comments reflect an appreciation of having a “voice.”

This study, Litterio contends, challenges the assumption that contract grading is necessarily “more egalitarian, positive, [and] student-centered.” The process can still strike students as biased and based entirely on the instructor’s perspective, she found. She argues that the reflection on the relationship between student and teacher roles enabled by contract grading can lead students to a deeper understanding of “collective norms and contexts of their actions as they enter into the professional world.”



Goldblatt, Eli. Expressivism as “Tacit Tradition.” CCC, Feb. 2017. Posted 03/15/2017.

Goldblatt, Eli. “Don’t Call It Expressivism: Legacies of a ‘Tacit Tradition’.” College Composition and Communication 68.3 (2017): 438-65. Print.

Eli Goldblatt explores what he considers the “subtle legacies” (442) of a “much maligned movement” in composition studies, expressivism (439). He locates his exigency in conversations about the value of a “literacy autobiography” he recently published. These discussions led him to believe that this form of writing did not meet his colleagues’ definition of respectable academic work (438-39).

For Goldblatt, expressivist tendencies may be rejected by theorists but persist in much recent work in the field, creating what Christopher Burnham and Rebecca Powell call a “tacit tradition” within the field (qtd. in Goldblatt 440). Goldblatt argues that recognizing the value and influence of expression will lead to a sense of writing that more fully integrates important aspects of what actually inspires writers.

Graduate students, he reports, often learn about expressivism via the scholarly debate between David Bartholomae and Peter Elbow in 1989 and 1991; such theoretical work cast personal expression as too grounded in the individual and “lacking in a political analysis of the composing situation in schools” (440).

Yet, Goldblatt observes, students often prefer “personal writing,” which they may consider “relatable” (439); his graduate students exhibit interest in the role of the personal in literacy activities in their own research (440). He posits, with Burnham and Powell, that the research from the 1970s by James Britton and his associates reveals “some sort of Ur-expressive drive [that] stands behind all writing” (440).

Goldblatt traces overt strands of expressivism through the work of such scholars as Sherrie Gradin and Wendy Bishop (440-41). He posits that some resistance to expressivism in composition may be traceable to concerns about the kind of research that would lead to tenure and promotion as the field began to define itself within departments heavily populated by literary critics (445). He notes “two stigmas” attached to expressivism: one is its centrality to high-school pedagogy; in its effort to establish itself as a respectable college-level endeavor, composition distanced itself from methods practiced in K-12 (446). Similarly, the field set itself apart from creative writing, in which, Goldblatt recounts, instruction in his experience emphasized “aesthetic achievement rather than self-actualization” (447).

Wendy Bishop, who characterized herself as “something-like-an-expressivist” (qtd. in Goldblatt 448), subsequently became CCCC chair. Goldblatt notes her defense of her pedagogy against the claim that expressivism

keep[s] students in a state of naiveté, [doesn’t] prepare them for the languages of the academy, . . . and “emphasize[s] a type of self-actualization which the outside world would indict as sentimental and dangerous.” (Bishop, qtd. in Goldblatt 447-48; quoting from Stephen M. Fishman and Lucille Parkinson McCarthy)

Still, Goldblatt contends, her stance was “more admired than imitated” (448), doing little to recuperate expressivism within the field.

Despite his own commitment to poetry, Goldblatt acknowledges the importance of composition’s “social turn” and the power of the “social-epistemic rhetoric” promulgated by James Berlin and others. Still, he finds the rejection of expressivism problematic in recent movements in college writing such as the focus on transfer and the “writing about writing” program advocated by scholars like Elizabeth Wardle and Doug Downs. Goldblatt worries that too much emphasis on “school success and professional preparation” (441) undercuts “two impulses” that he posits underlie the need to write: “the desire to speak out of your most intimate experiences and to connect with communities in need” (442).

Goldblatt examines “habits of mind” that he associates with expressivism in the recent work of four scholars who, he believes, would not explicitly call themselves expressivists (443). In Goldblatt’s view, Robert Yagelski’s Writing as a Way of Being “seems both anchored in and estranged from expressivism” (448). Yagelski’s focus on “the ‘writer writing’ rather than the ‘writer’s writing’” seems to Goldblatt a “phenomenological” approach to composing (448) that values the social impact of relationships at the same time it encourages individual self-actualization (448). Goldblatt compares Yagelski’s views to Ken Macrorie’s in his 1970 book Uptaught in that both reject “standardized instruction” in favor of “writing as a means to explore and enrich experience” (450), undoing a “false binary” between writing for the self and writing to engage with the world (448).

In Adam Banks’s Digital Griots, Goldblatt finds the personal entering through voice and style that both invoke the African-American tradition while “consciously modeling that social boundaries everywhere must be crossed” (451). Banks recounts “personal testimony” from young African Americans for whom individual storytelling establishes solidarity while creating connections with the past (452). Goldblatt notes that unlike early expressivists, Banks rejects the sense that “all expression is drawn from the same well” (453). Instead, he “remixes” many different individual voices to generate an implicit expressivism as “a deep and dialogic commitment to the individual within the swirl of events, movements, and economic pressures” (453-54).

Tiffany Rousculp’s Rhetoric of Respect recounts her creation and administration of the Community Writing Center at Salt Lake City Community College (454). Goldblatt finds Rousculp addressing tensions between progressive Freirean motives and her recognition that community members from a wide range of backgrounds would have personal reasons for writing that did not accord with the specific goals of the “sponsoring institution” (455). Although honoring these individual goals may seem antithetical to a social-epistemic approach, Goldblatt writes that the Center’s orientation remained deeply social because, in his view of Rousculp’s understanding, “individuals can only be seen within the web of their relationships to others” (456). Only when able to escape the constraints of the various institutions controlling their lives and select their own reasons for writing, Goldblatt posits, can individuals “exert agency” (456).

Sondra Perl’s On Austrian Soil depicts a teaching experience in which she worked with native Austrian writers to explore the legacy of the country’s Nazi past. Stating that he connects Perl not so much with early expressivism as with the origins of the process movement (458), Goldblatt notes her interest in the “personal, even bodily, experience of composing” (457). In his view, her experience in Austria, though painful in many ways, highlights the ways in which students’ emotional positioning, which can both inspire and limit their ability to write, must often become a teacher’s focus (458). Moreover, Goldblatt stresses, the learning both for individuals and the group arose from the shared emotions, as Perl connects what she called each student’s “wonderful uniqueness” (qtd. in Goldblatt 459) with “the socially oriented responsibility” of ethical behavior (459).

Goldblatt hopes for an understanding within composition of how a sophisticated approach to expressivism can infuse writing with the “intentionality, joy, seriousness, and intimacy available in the act of writing” (461). He worries that the writing-about-writing agenda “elevates the study of writing over the experience of writing,” an agenda perhaps appropriate for more advanced writing majors but complicit in what he sees as higher education’s current “hostility toward intellectual play and exploration” in the service of completely managed institutional priorities. He proposes that recognizing the power of expressivism can fuel compositionists’ hopes that students will embrace writing:

Without an urgency that is felt as personal, a writer will always be looking to the teacher, the boss, the arbiter for both permission to begin and approval to desist. (461)



Sumpter, Matthew. Linked Creative Writing-Composition Courses. CE, Mar. 2016. Posted 05/01/2016.

Sumpter, Matthew. “Shared Frequency: Expressivism, Social Constructionism, and the Linked Creative Writing-Composition Class.” College English 78.4 (2016): 340-61. Print.

Matthew Sumpter advocates for “tandem” creative-writing and composition courses as first-year curricula. To support this claim, he examines the status of both composition and creative writing in the academy through the “dual metrics” of expressivism and social constructionism (341).

Sumpter characterizes the two types of writing classes as separate enterprises, describing creative writing as “an almost anti-academic endeavor” (Tim Mayers, qtd. in Sumpter 340), exhibiting a “lack of reflectiveness about what, how, and why one teaches creative writing” (340). He portrays composition, in contrast, as highly theorized and “characterized by a greater dedication to informed pedagogy” (340). He contends that both areas would benefit from increased communication: creative writing could draw on composition’s stronger critical and theoretical grounding while composition would be able to offer students more “tools with which to manipulate language’s rhythm, pace, sound, and appearance” (340).

He locates the roots of expressivism and social constructionism respectively in the work of Peter Elbow and David Bartholomae. In Sumpter’s view, Elbow’s project involved placing students and their lives and thoughts at the center of the classroom experience in order to give them a sense of themselves as writers (342), while Bartholomae saw such emphasis on students’ individual expression as a “sleight of hand” that elides the power of the teacher and the degree to which all writing is a product of culture, history, and textual interaction (qtd. in Sumpter 342). For Sumpter, Bartholomae’s approach, which he sees as common in the composition classroom, generates a teacher-centered pedagogy (342-43).

Sumpter points to ways in which current uses of these two approaches merge to create “a more flexible version of each philosophy” (341). By incorporating and valuing diverse student voices, expressivism gains a critical, socially aware component, while social constructionists exploit the de-emphasis on the genius of the individual author to welcome voices that are often marginalized and to increase student confidence in themselves as writers (344). Yet, Sumpter argues, attention to the differences in these two philosophies enables the implications of each to be explored more fully (344).

Sumpter presents a history of the relationship between creative writing and composition, beginning in the late 19th and early 20th centuries, when, according to D. G. Myers, there was no distinction between the two (cited in Sumpter 345). The next part of the 20th century saw an increasing emphasis on “efficiency,” which led writing classes to a focus on “practical activities” (Myers, qtd. in Sumpter 345). Creative writing, meanwhile, allied itself with New Criticism, “melding dual impulses—writing and literature, expression and ideas, art and social practice” (345). This liaison, Sumpter writes, gave way fairly quickly after World War II to a new role for universities as they tried to assert themselves as a “haven for the arts” (Myers, qtd. in Sumpter 346), leading to a rupture between creative writing and criticism (346).

Sumpter states that this rupture, establishing as it did that creative writing was “something different from an academic discipline” (Tim Mayers, qtd. in Sumpter 346; emphasis original), coincided with composition’s development as an academic field. As composition studies continued to evolve theoretically, according to Sumpter, creative writing pedagogy retreated into “lore,” disappearing from discussions of the history of writing instruction like those of Gerald Graff and James Berlin (347).

Sumpter references moves during the latter decades of the 20th century to question the divorce between the two fields, but posits the need to examine creative-writing pedagogy more carefully in order to assess such moves. He focuses in particular on criticism of the workshop model, which scholars such as Patrick Bizzaro and Michael McClanahan and Kelly Ritter characterize as built around a dominating teacher who imposes conformity on student writers (348). Moreover, according to Sumpter, the pursuit of consensus in the workshop model “will reflect a dominant ideology” (348) that excludes many students’ unique or marginalized voices and experiences (349). In Sumpter’s view, theory like that informing composition studies can disrupt these negative practices (349).

Sumpter examines a number of scholarly proposals for bridging the gap between creative writing and composition. Some adjust pedagogy in small ways to integrate expressivism and social constructionism (353-54). Others more aggressively redesign pedagogy: for example, Tim Mayers proposes a course built around “craft criticism,” which he says can meld creative writing with “sociopolitical understandings of literacy” to locate it in “a more general intellectual framework concerning literacy itself” (qtd. in Sumpter 354). Wendy Bishop’s “transactional workshop” includes “strong components of exploratory and instrumental writing” as well as self-reflection to introduce theory while retaining students as the pedagogical center (qtd. in Sumpter 355).

Other models revise workshop design: for example, Hal Blythe and Charlie Sweet have students respond to each other’s work in small groups, meeting with an instructor only occasionally to diminish the dominance of the teacher (355). Sumpter discusses other models that ask composition to encourage risk-taking, originality, and experimentation (357).

Sumpter expresses concern that some models, such as Mayers’s, ultimately fail to put expressivism on equal footing with social constructionism (354) and that efforts to inject social-constructionism into creative writing courses can impose “certain pedagogical traits that just about every theorist of creative writing pedagogy wants to avoid,” such as increased teacher dominance (353). His solution is a two-course curriculum in which the two courses are taught separately, though coordinated, for example, by theme (358) and each infused with aspects of its counterpart (351, 359).

He grounds this proposal in claims that what creative writing offers is sufficiently different and valuable that it deserves its own focus, and that, if simply added to composition classes, it will always risk being eclipsed by the theoretical and analytical components (350-52). He addresses the institutional burden of staffing this extra course by adapting Blythe and Sweet’s model, in which most of the feedback burden is taken on by students in small groups and the instructor’s role is minimized. In such a model, he argues, current faculty and graduate instructors can take on additional course assignments without substantially increasing workload (358-59).

The virtues of such a model, he contends, include allowing each course to focus on its own strengths while addressing its weaknesses and “formalizing” the equal value of creative writing in the academy. He believes that realizing these goals “will give students a deep, diverse exposure to the world of written discourse and their place in it” (359).