College Composition Weekly: Summaries of research for college writing professionals



Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on shorter memos on the four “linked essays” they wrote for the class. These “reflection-in-action” memos, using the terminology of Kathleen Blake Yancey, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.
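
The article does not detail the mechanics of this comparison beyond naming Word’s feature, but the underlying operation is an ordinary text diff between a submitted draft and its revision. A minimal sketch of that idea in Python, assuming plain-text copies of the two drafts (the file names below are hypothetical, not from the study):

    # A sketch only: approximating the "compare documents" step by diffing a
    # student's first draft against the revision with Python's standard difflib.
    import difflib

    def revision_diff(first_path, revised_path):
        """Return a unified diff showing what changed between two drafts."""
        with open(first_path, encoding="utf-8") as f:
            first = f.readlines()
        with open(revised_path, encoding="utf-8") as f:
            revised = f.readlines()
        return list(difflib.unified_diff(first, revised,
                                         fromfile="first draft", tofile="revision"))

    # Hypothetical file names for illustration.
    for line in revision_diff("draft1.txt", "draft_final.txt"):
        print(line, end="")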

In order to assess the materials, the team created a rubric judging the revisions as “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent responses, 55% that were adequate, and 26% that were inadequate (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” where students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have increased in their awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on student responses to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed the student to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues called up by the teacher feedback and respond to these concerns in the light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to first person in the reflective memos paired with visible revisions as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress-narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original), and to provide students with a visual means of seeing their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).

 


Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”
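
For readers who want to picture the resulting data, the scheme amounts to two labels per comment, which can then be tallied. A minimal Python sketch with hypothetical records (the study itself reports only aggregate counts):

    # A sketch only: tallying comments under the study's classification scheme
    # (in-draft, marginal, end; surface-level vs. substance-level).
    from collections import Counter

    coded_comments = [
        {"placement": "marginal", "level": "substance"},
        {"placement": "marginal", "level": "surface"},
        {"placement": "end",      "level": "substance"},
        {"placement": "in-draft", "level": "surface"},
    ]

    placement_counts = Counter(c["placement"] for c in coded_comments)
    level_counts = Counter(c["level"] for c in coded_comments)
    print(placement_counts)  # Counter({'marginal': 2, 'end': 1, 'in-draft': 1})
    print(level_counts)      # Counter({'substance': 2, 'surface': 2})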

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation impacted students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students reported seeking feedback from additional readers, such as parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of it for the drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.


Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such binary determinations were warranted because the large sample size made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
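
For readers unfamiliar with the technique, a median split simply labels each participant relative to the sample median. A minimal Python sketch, using hypothetical composite ability scores rather than the authors’ actual SAT-and-grades measure:

    # A sketch only: a median split over hypothetical composite ability scores.
    from statistics import median

    def median_split(scores):
        """Label each participant 'high' or 'low' relative to the sample median."""
        cutoff = median(scores.values())
        # Ties at the median go to "high" here; the study does not specify.
        return {sid: ("high" if value >= cutoff else "low")
                for sid, value in scores.items()}

    ability = {"s01": 610, "s02": 540, "s03": 580, "s04": 700}  # hypothetical
    print(median_split(ability))
    # {'s01': 'high', 's02': 'low', 's03': 'low', 's04': 'high'}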

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or the share of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).
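
In other words, the implementation rate is simple arithmetic over the coded comments: the share of actionable comments on which the author made some change, with revision quality tracked separately. A minimal Python sketch with hypothetical coded records, not the authors’ coding apparatus:

    # A sketch only: computing an implementation rate and an improvement rate
    # from hypothetical coded comments.
    comments = [
        {"acted_on": True,  "improved_draft": True},
        {"acted_on": True,  "improved_draft": False},
        {"acted_on": False, "improved_draft": False},
        {"acted_on": False, "improved_draft": False},
    ]

    implementation_rate = sum(c["acted_on"] for c in comments) / len(comments)
    improvement_rate = sum(c["improved_draft"] for c in comments) / len(comments)
    print(f"implemented: {implementation_rate:.0%}, improved the draft: {improvement_rate:.0%}")
    # implemented: 50%, improved the draft: 25%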

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be impacted by limits or “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn propose that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed enhanced improvement across many traits, including “quality, organization, content, use of detail, and style” as well as time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that “intellimetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor responds to specific errors in each text individually.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which, however, allowed access to “rich data” and “detailed description” of actual use (167). They note as well that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts also may have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because the AEE scores prompt revision, such programs can effectively augment writing instruction, but recommend that scores need to track student development so that as students score near the maximum at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).



Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of their peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric” using a software package, My Reviewers, that allowed for discursive comments but also, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).
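
The statistical work here amounts to trait-by-trait correlation between the two sets of ratings. A minimal Python sketch with hypothetical five-point scores for a single rubric trait (not the study’s data), in which peers rate slightly higher than the instructor while still tracking the same rank order:

    # A sketch only: a Pearson correlation between instructor and peer ratings
    # for one rubric trait, using hypothetical five-point scores.
    from statistics import correlation  # available in Python 3.10+

    instructor_focus = [4, 3, 2, 5, 3, 4, 2, 3]
    peer_focus       = [4, 4, 3, 5, 4, 4, 3, 4]  # peers tend to score higher

    r = correlation(instructor_focus, peer_focus)
    print(f"Pearson r for the Focus trait: {r:.2f}")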

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer-review so that it benefits the widest range of students (73-74).



T. Bourelle et al. Using Instructional Assistants in Online Classes. C&C, Sept. 2015. Posted 10/13/2015.

Bourelle, Tiffany, Andrew Bourelle, and Sherry Rankins-Robertson. “Teaching with Instructional Assistants: Enhancing Student Learning in Online Classes.” Computers and Composition 37 (2015): 90-103. Web. 6 Oct. 2015.

Tiffany Bourelle, Andrew Bourelle, and Sherry Rankins-Robertson discuss the “Writers’ Studio,” a pilot program at Arizona State University that utilized upper-level English and education majors as “instructional assistants” (IAs) in online first-year writing classes. The program was initiated in response to a request from the provost to cut budgets without affecting student learning or increasing faculty workload (90).

A solution was an “increased student-to-teacher ratio” (90). To ensure that the creation of larger sections met the goal of maintaining teacher workloads and respected the guiding principles put forward by the Conference on College Composition and Communication Committee for Best Practices in Online Writing Instruction in its March 2013 Position Statement, the team of faculty charged with developing the cost-saving measures supplemented “existing pedagogical strategies” with several innovations (91).

The writers note that one available cost-saving step was to avoid staffing underenrolled sections. To meet this goal, the team created “mega-sections” in which one teacher was assigned for every 96 students, the equivalent of a full-time load. Once the enrollment reached 96, a second teacher was assigned to the section, and the two teachers team-taught. T. Bourelle et al. give the example of a section of the second semester of the first-year sequence that enrolled 120 students and was taught by two instructors. These 120 students were assigned to 15-student subsections (91).

T. Bourelle et al. note several reasons why the new structure potentially increased faculty workload. First, they cite research by David Reinheimer to the effect that teaching writing online is inherently more time-intensive than instructors may expect (91). Second, the planned curriculum included more drafts of each paper, requiring more feedback. In addition, the course design required multimodal projects. Finally, students also composed “metacognitive reflections” to gauge their own learning on each project (92).

These factors prompted the inclusion of the IAs. One IA was assigned to each 15-student group. These upper-level students contributed to the feedback process. First-year students wrote four drafts of each paper: a rough draft that received peer feedback, a revised draft that received comments from the IAs, an “editing” draft students could complete using the writing center or online resources, and finally a submission to the instructor, who would respond by either accepting the draft for a portfolio or returning it with directions to “revise and resubmit” (92). Assigning portfolio grades fell to the instructor. The authors contend that “in online classes where students write multiple drafts for each project, instructor feedback on every draft is simply not possible with the number of students assigned to any teacher, no matter how she manages her time” (93).

T. Bourelle et al. provide extensive discussion of the ways the IAs prepared for their roles in the Writers’ Studio. A first component was an eight-hour orientation in which the assistants were introduced to important teaching practices and concepts, in particular the process of providing feedback. Various interactive exercises and discussions allowed the IAs to develop their abilities to respond to the multimodal projects required by the Studio, such as blogs, websites, or “sound portraits” (94). The instruction for IAs also covered the distinction between “directive” and “facilitative” feedback, with the latter designed to encourage “an author to make decisions and [give] the writer freedom to make choices” (94).

Continuing support throughout the semester included a “portfolio workshop” that enabled the IAs to guide students in their production of the culminating eportfolio requirement, which required methods of assessment unique to electronic texts (95). Bi-weekly meetings with the instructors of the larger sections to which their cohorts belonged also provided the IAs with the support needed to manage their own coursework while facilitating first-year students’ writing (95).

In addition, IAs enrolled in an online internship that functioned as a practicum comparable to practica taken by graduate teaching assistants at many institutions (95-97). The practicum for the Writers’ Studio internship reinforced work on providing facilitative feedback but especially incorporated the theory and practice of online instruction (96). T. Bourelle et al. argue that the effectiveness of the practicum experience was enhanced by the degree to which it “mirror[ed]” much of what the undergraduate students were experiencing in their first-year classes: “[B]oth groups of beginners are working within initially uncomfortable but ultimately developmentally positive levels of ambiguity, multiplicity, and open-endedness” (Barb Blakely Duffelmeyer, qtd. in T. Bourelle et al. 96). Still quoting Duffelmeyer, the authors contend that adding computers “both enriched and problematized” the pedagogical experience of the coursework for both groups (96), imposing the need for special attention to online environments.

Internship assignments also gave the IAs a sense of what their own students would be experiencing by requiring an eportfolio featuring what they considered their best examples of feedback to student writing as well as reflective papers documenting their learning (98).

The IAs in the practicum critiqued the first-year curriculum, for example suggesting stronger scaffolding for peer review and better timing of assignments. They wrote various instructional materials to support the first-year course activities (97).

Their contributions to the first-year course included “[f]acilitating discussion groups” (98) and “[d]eveloping supportive relationships with first-year writers” (100), but especially “[r]esponding to revised drafts” (99). T. Bourelle et al. note that the IAs’ feedback differed from that of peer reviewers in that the IAs had acquired background in composition and rhetorical theory; unlike writing-center tutors, the IAs were more versed in the philosophy and expectations embedded in the course itself (99). IAs were particularly helpful to students who had misread the assignments, and they were able to identify and mentor students who were falling behind (98, 99).

The authors respond to the critique that the IAs represented uncompensated labor by arguing that the Writers’ Studio offered a pedagogically valuable opportunity that would serve the students well if they pursued graduate or professional careers as educators, emphasizing the importance of designing such programs to benefit the students as well as the university (101). They present student and faculty testimony on the effectiveness of the IAs as a means of “supplement[ing] teacher interaction” rather than replacing it (102). While they characterize the “monetary benefit” to the university as “small” (101), they consider the project “successful” and urge other “teacher-scholars to build on what we have tried to do” (102).


Cox, Black, Heney, and Keith. Responding to Students Online. TETYC, May 2015. Posted 07/22/15.

Cox, Stephanie, Jennifer Black, Jill Heney, and Melissa Keith. “Promoting Teacher Presence: Strategies for Effective and Efficient Feedback to Student Writing Online.” Teaching English in the Two-Year College 42.4 (2015): 376-91. Web. 14 July 2015.

Stephanie Cox, Jennifer Black, Jill Heney, and Melissa Keith address the challenges of responding to student writing online. They note the special circumstances attendant on online teaching, in which students lack the cues provided by body language and verbal tone when they interpret instructor comments (376). Students in online sections, the authors write, do not have easy access to clarification and individual direction, and may not always take the initiative in following up when their needs aren’t met (377). These features of the online learning environment require teachers to develop communicative skills especially designed for online teaching.

To overcome the difficulty teachers may find in building a community among students with whom they do not interact face-to-face, the authors draw on the Community of Inquiry framework developed by D. Randy Garrison. This model emphasizes presence as a crucial rhetorical dimension in community building, distinguishing between “social presence,” “cognitive presence,” and “teacher presence” as components of a classroom in which teachers can create effective learning environments.

Social presence indicates the actions and rhetorical choices that give students a sense of “a real person online,” in the words of online specialists Rena M. Palloff and Keith Pratt (qtd. in Cox et al. 377). Moves that allow the teacher to interact socially through the response process decrease the potential for students to “experience isolation and a sense of disconnection” (377). Cognitive presence involves activities that contribute to the “creation of meaning” in the classroom as students explore concepts and ideas, both individually and as part of the community. Through teacher presence, instructors direct learning and disseminate knowledge, setting the stage for social and cognitive interaction (377).

In the authors’ view, developing effective social, cognitive, and teacher presence requires attention to the purpose of particular responses depending on the stage of the writing process, to the concrete elements of delivery, and to the effects of different choices on the instructor’s workload.

Citing Peter Elbow’s discussion of “ranking and evaluation,” the authors distinguish between feedback that assigns a number on a scale and feedback that encourages ongoing development of an idea or draft (376-79; emphasis original). Ranking during early stages may allow teachers to note completion of tasks; evaluation, conversely, involves “communication” that allows students to move forward fruitfully on a project (379).

The authors argue that instructors in digital environments should follow James E. Porter’s call for “resurrecting the neglected rhetorical canon of delivery” (379). Digital teaching materials provide opportunities like emoticons for emulating the role of the body that is important to classical theories of delivery; such tools can emphasize emotions that can be lost in online exchanges.

Finally, the authors note the tendency for responding online to grow into an overwhelming workload. “Limit[ing] their comments” is a “healthy” practice that teachers need not regret. Determining what kind of feedback is most appropriate to a given type of writing is important in setting these limits, as is making sure that students understand that different tasks will elicit different kinds of response (379-80).

The authors explore ways to address informal writing without becoming overwhelmed. They point out that teachers often don’t respond in writing to informal work in face-to-face classrooms and thus do not necessarily need to do so in online classes. They suggest that “generalized group comments” can effectively point out shared trends in students’ work, present examples, and enhance teacher presence. Such comments may be written, but can also be “audio” or “narrated screen capture” that both supply opportunities for generating social and teacher presence while advancing cognitive goals.

They recommend making individual comments on informal work publicly, posting only “one formative point per student while encouraging students to read all of the class postings and the instructor responses” (382). Students thus benefit from a broader range of instruction. Individual response is important early and in the middle of the course to create and reinforce students’ connections with the instructor; it is also important during the early development of paper ideas when some students may need “redirect[ion]” (382).

The authors also encourage “feedback-free spaces,” especially for tentative early drafting; often making such spaces visible to all students gives students a sense of audience while allowing them to share ideas and experience how the writing process often unfolds through examples of early writing “in all its imperfection” (383).

Cox et al. suggest that feedback on formal assignments should embrace Richard Straub’s “six conversational response strategies” (383), which focus on informal language, specific connections to the student’s work, and an emphasis on “help or guidance” (384). The authors discuss five response methods for formal tasks. In their view, rubrics work best when free of complicated technical language and when integrated into a larger conversation about the student’s writing (385-86). Cox et al. recommend using the available software programs for in-text comments, which students find more legible and which allow instructors to duplicate responses when appropriate (387). The authors particularly endorse “audio in-text comments,” which not only save time but also allow the students to hear the voice of an embodied person, enhancing presence (387). Similarly, they recommend generating holistic end-comments via audio, with a highlighting system to tie the comments back to specific moments in the student’s text (387-88). Synchronous conferences, facilitated by many software options including screen-capture tools, can replace face-to-face conferences, which may not work for online students. The opportunity to talk not only about writing but also about other aspects of the student’s environment further builds social, cognitive, and teacher presence (388).

The authors offer tables delineating the benefits and limitations of responses both to informal and formal writing, indicating the kind of presence supported by each and options for effective delivery (384, 389).