College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”), “marginal,” and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies,” relying on observation to generate questions and hypotheses rather than beginning with preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments the team examined, as well as in the end comments that appeared on almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” arose in 74% of the interviews. Strategies for dealing with confusion included “ignor[ing] the comment completely,” trying to act on the comment without understanding it, and writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation affected students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed that they had sought feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of it for the drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” identified as the goal of learning in the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.


Litterio, Lisa M. Contract Grading: A Case Study. J of Writing Assessment, 2016. Posted 04/20/2017.

Litterio, Lisa M. “Contract Grading in a Technical Writing Classroom: A Case Study.” Journal of Writing Assessment 9.2 (2016). Web. 5 Apr. 2017.

In an online issue of the Journal of Writing Assessment, Lisa M. Litterio, who characterizes herself as “a new instructor of technical writing,” discusses her experience implementing a contract grading system in a technical writing class at a state university in the Northeast. Her “exploratory study” was intended to examine student attitudes toward the contract-grading process, with a particular focus on how the method affected their understanding of “quality” in technical documents.

Litterio’s review of research into contract grading suggests that it can support a process approach to writing as students consider the elements that contribute to an “excellent” response to an assignment. Moreover, Litterio contends, because it creates a more democratic classroom environment and empowers students to take charge of their writing, contract grading also supports critical pedagogy in the Freirean model. Litterio draws on research to support the additional claim that contract grading “mimic[s] professional practices” in that “negotiating and renegotiating a document,” as students do in contracting for grades, is a practice that “extends beyond the classroom into a workplace environment.”

Much of the research she reports dates to the 1970s and 1980s, often reflecting work in speech communication, but she also cites models from Ira Shor, Jane Danielewicz and Peter Elbow, and Asao Inoue from the 2000s. In a common model, students can negotiate the quantity of work that must be done to earn a particular grade, but the instructor retains the right to assess quality and to assign the final grade. Litterio depicts her own implementation as a departure from some of these models in that she did make the final assessment, but applied criteria devised collaboratively by the students; moreover, her study differs from earlier reports of contract grading in that it focuses on students’ attitudes toward the process.

Her Fall 2014 course, which she characterizes as a service course, enrolled twenty juniors and seniors representing seven majors. Neither Litterio nor any of the students were familiar with contract grading, and no students withdrew after learning of Litterio’s grading intentions from the syllabus and class announcements. At mid-semester and again at the end of the course, Litterio administered an anonymous open-ended survey to document student responses. Adopting the role of “teacher-researcher,” Litterio hoped to learn whether involvement in generating criteria led students to a deeper awareness of the rhetorical nature of their projects, as well as to “more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.”

Litterio shares the contract options, which allowed students to agree to produce a stated number of assignments of either “excellent,” “great,” or “good” quality, an “entirely positive grading schema” that draws on Frances Zak’s claim that positive evaluations improved student “authority over their writing.”

The criteria for each assignment were developed in class discussion through an open voting process that resulted in general, if not absolute, agreement. Litterio provides the class-generated criteria for a résumé, which included length, format, and the expectation of “specific and strong verbs.” As the instructor, Litterio ultimately decided whether these criteria were met.

Mid-semester surveys indicated that students were evenly split in their preferences for traditional grading models versus the contract-grading model being applied. At the end of the semester, 15 of the 20 students expressed a preference for traditional grading.

Litterio coded the survey responses and discovered specific areas of resistance. First, some students cited the unfamiliarity of the contract model, which made it harder for them to “track [their] own grades,” in one student’s words. Second, the students noted that the instructor’s role in applying the criteria did not differ appreciably from instructors’ traditional role as it retained the “bias and subjectivity” the students associated with a single person’s definition of terms like “strong language.” Students wrote that “[i]t doesn’t really make a difference in the end grade anyway, so it doesn’t push people to work harder,” and “it appears more like traditional grading where [the teacher] decide[s], not us.”

In addition, students resisted seeing themselves and their peers as qualified to generate valid criteria and to offer feedback on developing drafts. Students wrote of the desire for “more input from you vs. the class,” their sense that student-generated criteria were merely “cosmetics,” and their discomfort with “autonomy.” Litterio attributes this resistance to two factors: students’ actual status as novices and the nature of the course, which required students to write for different discourse communities because of their differing majors. She suggests that contract grading may be more appropriate for writing courses within majors, in which students may be more familiar with the specific nature of writing in a particular discipline.

However, students did confirm that the process of generating criteria made them more aware of the elements involved in producing exemplary documents in the different genres. Incorporating student input into the assessment process, Litterio believes, allows instructors to be more reflective about the nature of assessment in general, including the risk of creating a “yes or no . . . dichotomy that did not allow for the discussions and subjectivity” involved in applying a criterion. Engaging students throughout the assessment process, she contends, provides them with more agency and more opportunity to understand how assessment works. Student comments reflect an appreciation of having a “voice.”

This study, Litterio contends, challenges the assumption that contract grading is necessarily “more egalitarian, positive, [and] student-centered.” The process can still strike students as biased and based entirely on the instructor’s perspective, she found. She argues that the reflection on the relationship between student and teacher roles enabled by contract grading can lead students to a deeper understanding of “collective norms and contexts of their actions as they enter into the professional world.”