College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on the shorter memos they had written about the four “linked essays” composed for the class. These “reflection-in-action” memos, in Kathleen Blake Yancey’s terminology, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.

In order to assess the materials, the team created a rubric judging each revision as “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of the samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent, 55% adequate, and 26% inadequate responses (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” in which students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have gained awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on student responses to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed them to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues raised in the teacher feedback and respond to these concerns in light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to the first person in the reflective memos, paired with visible revisions, as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original) and to provide students with a visual record of their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).


Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure-track faculty and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency among the 455 marginal comments the team examined, as well as in the end comments that appeared on almost all of the 47 drafts. Substance-level comments outnumbered surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” arose in 74% of the interviews. Strategies for dealing with confusion included “ignor[ing] the comment completely,” trying to act on the comment without understanding it, and writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation shaped students’ attempts to respond to comments, as did outside factors like stress and time management. In discussing their final drafts, students revealed that they had sought feedback from additional readers, such as parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of it for the drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and worrying about having “points taken off” were salient issues for many. Bowden notes that the interviews suggested a desire among some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, makes it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to reduce the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately, Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.


Comer and White. MOOC Assessment. CCC, Feb. 2016. Posted 04/18/2016.

Comer, Denise K., and Edward M. White. “Adventuring into MOOC Writing Assessment: Challenges, Results, and Possibilities.” College Composition and Communication 67.3 (2016): 318-59. Print.

Denise K. Comer and Edward M. White explore assessment in the “first-ever first-year-writing MOOC,” English Composition I: Achieving Expertise, developed under the auspices of the Bill & Melinda Gates Foundation, Duke University, and Coursera (320). Working with “a team of more than twenty people” with expertise in many areas of literacy and online education, Comer taught the course (321), which enrolled more than 82,000 students, 1,289 of whom earned a Statement of Accomplishment indicating a grade of 70% or higher. Nearly 80% of the students “lived outside the United States,” and, for a majority, English was not their first language, although 59% of these students said they were “proficient or fluent in written English” (320). Sixty-six percent held bachelor’s or master’s degrees.

White designed and conducted the assessment, which addressed concerns about MOOCs as educational options. The authors recognize MOOCs as “antithetical” (319) to many accepted principles in writing theory and pedagogy, such as the importance of interpersonal instructor/student interaction (319), the imperative to meet the needs of a “local context” (Brian Huot, qtd. in Comer and White 325) and a foundation in disciplinary principles (325). Yet the authors contend that as “MOOCs are persisting,” refusing to address their implications will undermine the ability of writing studies specialists to influence practices such as Automated Essay Scoring, which has already been attempted in four MOOCs (319). Designing a valid assessment, the authors state, will allow composition scholars to determine how MOOCs affect pedagogy and learning (320) and from those findings to understand more fully what MOOCs can accomplish across diverse populations and settings (321).

Comer and White stress that assessment processes extant in traditional composition contexts can contribute to a “hybrid form” applicable to the characteristics of a MOOC, such as the “scale” of the project and the “wide heterogeneity of learners” (324). Models for assessment in traditional environments as well as online contexts had to be combined with new approaches that addressed the “lack of direct teacher feedback and evaluation and limited accountability for peer feedback” (324).

For Comer and White, this hybrid approach must accommodate the degree to which the course combined the features of an “xMOOC” governed by a traditional academic course design with those of a “cMOOC,” in which learning occurs across “network[s]” through “connections” largely of the learners’ creation (322-23).

Learning objectives and assignments mirrored those familiar to compositionists, such as the ability to “[a]rgue and support a position” and “[i]dentify and use the stages of the writing process” (323). Students completed four major projects, the first three incorporating drafting, feedback, and revision (324). Instructional videos and optional workshops in Google Hangouts supported assignments like discussion forum participation, informal contributions, self-reflection, and peer feedback (323).

The assessment itself, designed to shed light on how best to assess such contexts, consisted of “peer feedback and evaluation,” “Self-reflection,” three surveys, and “Intensive Portfolio Rating” (325-26).

The course supported both formative and evaluative peer feedback through “highly structured rubrics” and extensive modeling (326). Students who had submitted drafts each received responses from three other students, and those who submitted final drafts received evaluations from four peers on a 1-6 scale (327). The authors argue that despite the level of support peer review requires, it is preferable to more expert-driven or automated responses because they believe that

what student writers need and desire above all else is a respectful reader who will attend to their writing with care and respond to it with understanding of its aims. (327)

They found that the formative review, although taken seriously by many students, was “uneven,” and students varied in their appreciation of the process (327-29). Meanwhile, the authors interpret the evaluative peer review as indicating that “student writing overall was successful” (330). Peer grades closely matched those of the expert graders, and, while marginally higher, were not inappropriately high (330).

The MOOC provided many opportunities for self-reflection, which the authors denote as “one of the richest growth areas” (332). They provide examples of student responses to these opportunities as evidence of committed engagement with the course; a strong desire for improvement; an appreciation of the value of both receiving and giving feedback; and awareness of opportunities for growth (332-35). More than 1,400 students turned in “final reflective essays” (335).

Self-efficacy measures revealed that students exhibited an unexpectedly high level of confidence in many areas, such as “their abilities to draft, revise, edit, read critically, and summarize” (337). Somewhat lower confidence levels in their ability to give and receive feedback persuade the authors that a MOOC emphasizing peer interaction served as an “occasion to hone these skills” (337). The greatest gain occurred in this domain.

Nine “professional writing instructors” (339) assessed portfolios for 247 students who had both completed the course and opted into the IRB component (340). This assessment confirmed that while students might not be able to “rely consistently” on formative peer review, peer evaluation could effectively supplement expert grading (344).

Comer and White stress the importance of further research in a range of areas, including how best to support effective peer response; how ESL writers interact with MOOCs; what kinds of people choose MOOCs and why; and how MOOCs might function in WAC/WID situations (344-45).

The authors stress the importance of avoiding “extreme concluding statements” about the effectiveness of MOOCs based on findings such as theirs (346). Their study suggests that different learners valued the experience differently; those who found it useful did so for varied reasons. Repeating that writing studies must take responsibility for assessment in such contexts, they emphasize that “MOOCs cannot and should not replace face-to-face instruction” (346; emphasis original). However, they contend that even enrollees who interacted only briefly with the MOOC left with an exposure to writing practices they would not otherwise have gained, and that the students who completed the MOOC satisfactorily outnumbered those Comer would have reached in 53 years of teaching her regular first-year sections (346).

In designing assessments, the authors urge, compositionists should resist the impulse to focus solely on the “Big Data” produced by assessments at such scales (347-48). Such a focus can obscure the importance of individual learners who, they note, “bring their own priorities, objectives, and interests to the writing MOOC” (348). They advocate making assessment an activity for the learners as much as possible through self-reflection and through peer interaction, which, when effectively supported, “is almost as useful to students as expert response and is crucial to student learning” (349). Ultimately, while the MOOC did not succeed universally, it offered many students valuable writing experiences (346).