College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision as “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on shorter memos on the four “linked essays” they wrote for the class. These “reflection-in-action” memos, using the terminology of Kathleen Blake Yancey, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.

In order to assess the materials, the team created a rubric judging the revisions as “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent responses, 55% that were adequate, and 26% that were inadequate (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” where students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have grown in their awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on students’ responses to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed them to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues called up by the teacher feedback and respond to these concerns in light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to first person in the reflective memos, paired with visible revisions, as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress-narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original), and to provide students with a visual means of seeing their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).

 



Kolln and Hancock. Histories of U. S. Grammar Instruction. English Teaching: Practice and Critique (NZ), 2005. Posted 04/22/2018.

Kolln, Martha, and Craig Hancock. “The Story of English Grammar in United States Schools.” English Teaching: Practice and Critique 4.3 (2005): 11-31. Web. 4 Mar. 2018.

Martha Kolln and Craig Hancock, publishing in English Teaching: Practice and Critique in 2005, respond in parallel essays to what they consider the devaluation of grammar teaching in United States schools and universities. English Teaching: Practice and Critique is a publication of the University of Waikato in New Zealand. The two essays trace historical developments in attitudes toward grammar education in U. S. English language curricula.

Kolln’s essay reports on a long history of uncertainty about teaching grammar in United States classrooms. Noting that confusion about the distinction between “grammar” and “usage” has pervaded discussions since the beginning of the twentieth century, Kolln cites studies from 1906 and 1913 to illustrate the prevalence of doubts that the time needed to teach grammar was justified in light of the many other demands upon public-school educators (13).

Citing Richard Braddock, Richard Lloyd-Jones, and Lowell Schoer’s 1963 Research in Written Composition to note that “early research in composition and grammar was not highly developed” (13), Kolln argues that the early studies were flawed (14). A later effort to address grammar teaching, An Experience Curriculum in English, was advanced by a 1936 National Council of Teachers of English (NCTE) committee; this program, Kolln writes, “recommended that grammar be taught in connection with writing, rather than as an isolated unit of study” (14). She contends that the effort ultimately failed because teachers did not accept its focus on “functional grammar” in place of “the formal method [they] were used to” (14).

In Kolln’s history, the hiatus following this abortive project ended with the advent of structural linguistics in the 1950s. This new understanding of the workings of English grammar was originally received enthusiastically; Harold B. Allen’s 1958 Readings in Applied English Linguistics drew on nearly 100 articles, including many from NCTE (12). This movement also embraced Noam Chomsky’s 1957 Syntactic Structures; the NCTE convention in 1963 featured “twenty different sessions on language, . . . with 50 individual papers” under categories like “Semantics,” “Structural Linguistics for the Junior High School,” and “the Relationship of Grammar to Composition” (14-15).

Excitement over such “new grammar” (15), however, was soon “swept aside” (12). Kolln posits that Chomsky’s complex generative grammar, which was not meant as a teaching tool, did not adapt easily to the classroom (15). She traces several other influences supporting the continued rejection of grammar instruction. Braddock et al. in 1963 cited a study by Roland Harris containing “serious flaws,” according to two critics who subsequently reviewed it (16). This study led Braddock et al. to state that grammar instruction not only did not improve student writing, it led to “a harmful effect” (Braddock et al., qtd. in Kolln and Hancock 15). Kolln reports that this phrase is still referenced to argue against teaching grammar (15).

Other influences on attitudes toward grammar, for Kolln, include the advent of “student-centered” teaching after the Dartmouth seminar in 1966, the ascendancy of the process movement, and a rejection of “elitist” judgments that denigrated students’ home languages (16-17). As a result of such influences and others, Kolln writes, “By 1980, the respected position that grammar had once occupied was no longer recognized by NCTE” (17).

Addressing other publications and position statements that echo this rejection of grammar instruction, Kolln writes that teacher education, in particular, has been impoverished by the loss of attention to the structure of language (19). She contends that “[t]he cost to English education of the NCTE anti-grammar policy is impossible to calculate” (19).

She sees the creation of an official NCTE assembly, The Assembly for the Teaching of English Grammar (ATEG), as a shift toward an understanding of grammar that distinguishes it from rote drill on correctness. Several NCTE publications have forwarded the views of this group, including the book Grammar Alive! A Guide for Teachers and articles in English Journal and Language Arts (20). Kolln urges that grammar, properly understood, be “seen as a legitimate part of the Language Arts curriculum that goes beyond an aid to writing” (20).

Hancock frames his discussion with a contemporaneous article by R. Hudson and J. Walmsley about trends in grammar instruction in the U.K. He sees a consensus among educators in England that “an informed understanding of language and an appropriate metalanguage with which to discuss it” are important elements of language education (qtd. in Kolln and Hancock 21). Further, this consensus endorses a rejection of “the older, dysfunctional, error-focused, Latin-based school grammar” (21-22).

In his view, the grounds for such widespread agreement in the United States, rather than encouraging an appreciation of well-designed grammar instruction, in fact lead away from the possibility of such an appreciation (22-23). He sees a U. S. consensus through the 1960s that literature, especially as seen through New Criticism, should be the principal business of English instruction. The emphasis on form, he writes, did not embrace linguistic theory; in general, grammar was “traditional” if addressed at all, and was seen as the responsibility of elementary schools (22). Literature was displaced by Critical Theory, which challenged the claim that “there is or should be a monolithic, central culture or a received wisdom” in the valuation of texts (22).

Similarly, he maintains that the advent of composition as a distinct field with its focus on “what writers actually do when they write” led to studies suggesting that experienced writers saw writing as meaning-making while inexperienced writers were found to, in Nancy Sommers’s words, “subordinate the demands of the specific problems of the text to the demands of the rules” (qtd. in Kolln and Hancock 23). Downplaying the rules, in this view, allowed students to engage more fully with the purposes of their writing.

In Hancock’s view, language educators in the U.S. distanced themselves from grammar instruction in their focus on “‘empowerment’ in writing” in order to address the needs of more diverse students (24). This need required a new acknowledgment of the varying contexts in which language occurred and an effort to value the many different forms language might take. Recognition of the damage done by reductive testing models also drove a retreat from a grammar defined as “policing people’s mistakes” (24-25).

Hancock argues that the public arena in which students tend to be judged does not allow either correctness or grammar to “simply be wished away” (25). He suggests that the “minimalist” theories of Constance Weaver in the 1990s and linguists like Steven Pinker are attempts to address the need for students to meet some kinds of standards, even though those standards are often poorly defined. These writers, in Hancock’s reading, contend that people learn their native grammars naturally and need little intervention to achieve their communicative goals (25, 27).

Hancock responds that a problem with this approach is that students who do not rise to the expected standard are blamed for their “failure to somehow soak it up from exposure or from the teacher’s non-technical remarks” (25). Hancock laments the “progressive diminution of knowledge” that results when so many teachers themselves are taught little about grammar (25): the lack of a “deep grounding in knowledge of the language” means that “[e]diting student writing becomes more a matter of what ‘feels right’” (26).

As a result of this history, he contends, “language-users” remain “largely unconscious of their own syntactic repertoire” (26), while teachers struggle with contradictory demands with so little background that, in Hancock’s view, “they are not even well-equipped to understand the nature of the problem” (29). He faults linguists as well for debunking prescriptive models while failing to provide “a practical alternative” (26).

Hancock presents a 2004 piece by Laura Micciche as a “counter-argument to minimalist approaches” (28). Hancock reads Micciche to say that there are more alternatives to the problems posed by grammatical instruction than outright rejection. He interprets her as arguing that a knowledge of language is “essential to formation of meaning” (28):

We need a discourse about grammar that does not retreat from the realities we face in the classroom—a discourse that takes seriously the connection between writing and thinking, the interwoven relationship between what we say and how we say it. (Micciche, qtd. in Kolln and Hancock 28)

Hancock deplores the “vacuum” created by the rejection of grammar instruction, an undefended space into which he feels prescriptive edicts are able to insert themselves (28, 29). Like Kolln, he points to ATEG, which in 2005-2006 was working to shift NCTE’s “official position against the teaching of formal grammar” (28). Hancock envisions grammar education that incorporates “all relevant linguistic grammars” and a “thoughtfully selected technical terminology” (28), as well as an understanding of the value of home languages as “the foundation for the evolution of a highly effective writing voice” (29). Such a grammar, he maintains, would be truly empowering, promoting an understanding of the “connection between formal choices and rhetorical effect” (26).


 



Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation affected students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students reported seeking feedback from additional readers, such as parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of it for drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.



Moe, Peter Wayne. William Coles and “Themewriting” as Epideictic. CCC, Feb. 2018. Posted 03/02/2018.

Moe, Peter Wayne. “Reading Coles Reading Themes: Epideictic Rhetoric and the Teaching of Writing.” College Composition and Communication 69.3 (2018): 433-57. Print.

Peter Wayne Moe presents a reading of The Plural I: The Teaching of Writing by William E. Coles, Jr. Published in 1978, The Plural I narrates a course Coles taught in the fall of 1965-66 at the Case Institute of Technology (434). In Moe’s view, Coles’s course and his representation of it illuminate the relationship between writing pedagogy and epideictic rhetoric.

Moe notes that reviewers of Coles’s book found it counter to “the dominant traditions and pedagogies shaping composition” and thus “hard to read, hard to place, hard to value” (434). Moe hopes to “recover, and find value in” Coles’s contribution to the field (435).

Moe explores scholarly definitions and judgments of epideictic, many of which denigrate this rhetoric as superficial stylistic display that reinforces a community’s received values and therefore stifles critical inquiry (436). Moe contrasts it with “pragmatic” rhetorics that result in actions, like rhetorics of the “courtroom or senate” (435). He cites scholarship arguing that the role of the audience in the epideictic is not to act or “be persuaded; rather, the audience observes” (438). In doing so, an audience participates in epideictic as often defined: as bestowing “praise and blame” (438).

Scholars cited by Moe note that the “display” characterizing epideictic lays out “the shared values of a community”; etymologically, Moe shows, the term means “showing forth”; it is the rhetoric of “making known” (436). Moe argues that in performing these functions, epideictic becomes “the foundation from which a rhetor can praise and blame” (436). He contrasts the view that this showing forth sustains shared values with the contention that, in fact, epideictic can “reshape shared values,” and he argues that this reshaping is what Coles achieves in his use of this form in his writing classroom (437).

Moe cites Dale L. Sullivan to present education as fundamentally epideictic because it works to teach reasoning skills fitting particular contexts and “to instill in the student sentiments or emotions appropriate within the orthodoxy which the teacher represents” (Sullivan, qtd. in Moe 437). However, in Moe’s reading, Coles did not represent orthodoxy but instead pushed against it, using “little more than [the] praise and blame [of] student writing” to generate “sustained inquiry” capable of critically resisting banality and conformity (438).

Moe writes that The Plural I tracks the weekly assignments of a required first-year composition course, Humanities I (434). The chapters consist of these thirty assignments, several student papers mimeographed for discussion (ninety-four in all), and Coles’s account of each week’s classroom discussion (439). There was no textbook. According to Moe, “Coles dramatizes the classroom conversation; he does not transcribe.” Coles insisted that in these narratives nothing was made up (439).

Tracing Coles’s lessons through selected examples, Moe writes that Coles began by assigning an essay asking students to differentiate between amateurism and professionalism. The resulting essays, Coles declaimed, were “[t]riumphs of self-obliteration, . . . put-up jobs everyone of them, and as much of a bore to read as they must have been to write” (qtd. in Moe 440). In Coles’s view, these efforts represented what he called “Themewriting,” in which students displayed their understanding of what a teacher expected them to sound like (440).

Moe argues that this rhetorical choice represents students’ conception of the “shared values of this community, this classroom, and this teacher” (440), in which they draw on familiar patterns and commonplaces, believing that the community honors writing that, in Coles’s words, is “well-organized. It’s Clear, Logical, and Coherent. It’s neat” (qtd. in Moe 441). Coles asks questions that push students to challenge the voice of the Themewritten essays, ultimately creating consensus that “no one talks the way this paper sounds” (441). Moe depicts Coles creating a game of Themewriting in which students discover their ability to convert any set of terms, for example “man, black, and TNT” (442), into a formulaic set of moves that are both “inevitable” and “moralistic” (443).

Coles’s project, Moe contends, is to push students to think about what they are doing with language when they act on these assumptions about “what makes good writing” by undermining their confidence in these apparently sacrosanct shared values (443). Among Coles’s stated intentions is the development of a “common vocabulary” (qtd. in Moe 443) that will provide new ways to characterize writing (443). Developing this vocabulary, Moe argues, “serves an epideictic function, uniting the class in their practice of praise and blame” (443).

As part of this vocabulary production, Coles encourages the adoption of metaphors like “sky-writing” or “mayonnaise” to capture the characteristics the class assigned to Themewriting (444). Among these metaphors are names such as Steve or Suzie, a “character who ‘isn’t a character at all’ because she is composed solely of clichés” (Coles, qtd. in Moe 445). Coles finds, however, that students fall back too glibly on these critical terms, using them to avoid grappling with stylistic nuances that suggest deeper struggles with language (446).

As the class nears its end, Moe contends, students discover that “avoiding the rhetoric of cant” is nearly impossible and that articulating “‘another way of talking’” has been the difficult goal of Coles’s method (Coles, qtd. in Moe 447). Their loss of confidence in Themewriting and the challenge of finding a new understanding of what language can do upset students, leaving them feeling as if, in Coles’s words, “‘readiness with’ a certain kind of language is the same thing as a ‘loss of words’” (qtd. in Moe 448). However, Moe points out that students begin to notice how they manipulate language to create “a stylistic self” (449):

The “self construable from the way words fall on a page” is integral to Coles’s teaching. He clarifies that such a self is “not a mock or false self. . . .” The assignment sequence in The Plural I seeks to bring students to an awareness of how language constitutes this stylistic self and how one might use language in light of that awareness. (439)

Moe argues that writing teachers read student work as epideictic, reading it against the shared values of a community, not so much to be persuaded by arguments as to respond to the writer’s display of his or her use of language to create a particular stylistic self. He states that “persuasion, if it does occur, is a product of display—how well the student shows forth the various conventions of the discourses he or she hopes to enter” (451). This display is the ground on which persuasion “and other rhetorical acts” can take place (451). He argues that the value in Coles’s pedagogy is that he impels students to understand more precisely what they are doing when they partake in this display. Once they have recognized the shared values of the community, they become capable of “resisting them, rewriting them even, through praise and blame” (452).



Kraemer, Don J. Ethics, Morality, and Justice. CCC, June 2017. Posted 07/16/2017.

Kraemer, Don J. “The Good, the Right, and the Decent: Ethical Dispositions, the Moral Viewpoint, and Just Pedagogy.” College Composition and Communication 68.4 (2017): 603-28. Print.

Don J. Kraemer argues that scholars in composition studies conflate the terms “ethical” and “moral.” He contends that distinguishing between these concepts by examining the ethical-moral interface as “a topic” (607; emphasis original) can provide a heuristic opportunity that can enhance compositionists’ efforts to work with diverse student views and values.

A starting point for Kraemer is Joseph Harris’s 2015 article, “Reasoning at the Point of a Gun,” in which Harris records discussion with grad students about a first-year student writing in opposition to gun control (603-04). Kraemer reports that Harris’s concerns included both urging the student “to inhabit, at least for a moment, a point of view you disagree with” and, at the same time, “find[ing] a way to help him develop the argument he wants to make” (qtd. in Kraemer 605, 604).

Kraemer presents these goals as representing the confrontation between the moral and the ethical. He also quotes Patricia Bizzell’s 2009 “Composition Studies Saves the World!”, maintaining that her reference to her “personal morality” (qtd. in Kraemer 605) actually describes “an ethics” (604-05).

To explore the distinctions between these concepts, Kraemer draws on a “Kantian” approach in which, “ethically, we evaluate our actions in terms of the good, morally in terms of the right or obligatory” (606; emphasis original). He argues that we all belong to varied communities that may or may not share the same range of values or goods, that values can conflict even for individuals, and that these conflicts become “moral conflicts” in that we use moral reasoning to assess and judge them (605-06).

A further distinction Kraemer invokes to illuminate the moral-ethical interface is the difference between “what one is to be” and “what one is to do” (James Porter, qtd. in Kraemer 606-07). Kraemer categorizes questions about the kind of person an individual would like to be as ethical in that they deal with individual aspirations and values, the individual’s “good,” while questions about actions are questions about “what is the right thing to do,” that is, “the right thing for one, for anyone to do” (607; emphasis original), and are therefore moral. For Kraemer, what individuals aspire to may or may not accord with the universal right thing supplied by morality (607).

Kraemer argues that when morality and ethics confront each other, as they must, we use morality to assess and reason about our ethical choices. In this process, the ethical good, which may accrue to groups and communities as well as individuals and which may be specific to particular circumstances, is not overridden by the moral, universal judgment but is taken into account. When, in Kant’s words, “human morality” and “human happiness” come together in “union and harmony,” the result is the “highest possible good in the world” (qtd. in Kraemer 607). “This,” Kraemer writes, “is the just” (607).

An important component of the just in Kraemer’s formulation is that it takes into account what doing the right thing will cost the individual actor or the community in which a particular version of the good is invoked. The heuristic value of the moral-ethical distinction, in this view, is that it sustains the “inventive tension” (615) between what we owe others (the moral) and what we see as important to achieve, to succeed at (the ethical) (611).

This view of ethics provides Kraemer with the argument that an ethically directed writer might value the rewards, both tangible and psychic, of doing a particular kind of writing well, even if that kind of writing does not commit the individual to making the highest use of his time by acting specifically to benefit others (610, 619); in fact, an individual’s practice of the good as she sees it in her writing “may add to a reader’s labors, if not also offend that person, or worse” (615). Yet morality does not disappear; it involves the question “as to who benefits and who bears the cost” of an individual or group’s ethical choices (611). When these two kinds of stances “face each other,” we approach “the just” (611).

Kraemer develops his argument through a reading of John Duffy’s “Ethical Dispositions: A Discourse for Rhetoric and Composition.” Bringing this text into conversation with Aristotle’s Nicomachean Ethics and Chaïm Perelman and Lucie Olbrechts-Tyteca’s The New Rhetoric: A Treatise on Argumentation, Kraemer traces what he sees as Duffy’s movement between the ethical and the moral, arguing that keeping these terms separate allows a more fruitful understanding of the dilemma faced by writing teachers as they work to support students’ individual goals while also fostering a set of dispositions claimed by rhetoric and composition as foundational to the field’s mission.

For example, Kraemer examines Duffy’s statement that asking students to respond to counterarguments in their texts fosters “the dispositions of tolerance, generosity, and self-awareness” (qtd. in Kraemer 616). For Kraemer, this exhortation to students “seems unnecessarily unilateral” (616). If listening to others respectfully signals care for their ends and “that person’s life as an end in itself,” then we are obligated to “inquir[e] how his ends, taken as policy, would affect us—as well as any of the people we have the luck (good or bad) not to be” (617). In other words, this obligation requires us to expend the same rigor in examining our own position as that of others.

Kraemer provides an example of how such discussions in Duffy might more usefully reflect this interplay between morality and ethics:

It has indeed been the moral side of the discussion that has been voiced. . . . Giving voice to ethical virtue can take as little as adding, to the sentence that follows, “and to themselves”: “To teach these particular practices is therefore to teach students to read, speak and write in ways that express their commitments to other human beings [and to themselves]” (Duffy 224; bracketed material added). (618)

Kraemer addresses the problem of morality when it is imagined as and critiqued as a rigid universal code. He agrees with Duffy that a moral code adopted from the perspective of one group to the exclusion of others fails as a source of reasoning about the just. However, he contends that “writing pedagogy will be better informed . . . if morality is not dispensed with as a preexisting standard only” (612). Dismissing its attention to what might constitute the good for everyone and embracing only values attached to specific local contexts diminishes the power morality has to call ethics to account.

Apropos of the “‘perfect’ justice” that may result from too rigid an application of the universal, Kraemer turns to Aristotle’s idea of “decency,” which “corrects” laws that fail to establish the just universality they intend (620). Decency derives from the “practical wisdom” in play when morality “judg[es] in situations with that situation’s particulars in mind” (620).

Applied to the writing classroom, such decency, in Kraemer’s view, honors both individual decisions about “what a course well taught might mean” and claims about what such a course “might do for all students” (621). The tension between these goals is where Kraemer argues that we approach justice, a willingness, despite our individual ethics, to “try to establish terms with one another that everyone can agree are reasonable and fair” (621).



Noguerón-Liu and Hogan. Transnationalism and Digital Composition. RTE, Feb. 2017. Posted 07/06/2017.

Noguerón-Liu, Silvia, and Jamie Jordan Hogan. “Remembering Michoacán: Digital Representation of the Homeland by Immigrant Adults and Adolescents.” Research in the Teaching of English 51.3 (2017): 267-89. Print.

Silvia Noguerón-Liu and Jamie Jordan Hogan present a study of the use of visual elements, including digital images and information, by adults and adolescents from immigrant communities as they constructed documents reflecting their transnational identities.

The authors worked with two women and two middle-grade students with ties to the Mexican state of Michoacán. The women were participants in three semester-long sessions of a “digital literacy program for immigrant adults” designed for parents of children in a largely Latinx community; the seventh-graders were enrolled in a “digital story-telling program” meant to help them succeed in U.S. classrooms. Both programs were located in a small Southern city (272-73).

Noguerón-Liu and Hogan applied three theoretical concepts. Transnationalism theory allowed investigation of how “individuals maintain multiple social networks and links to both their home and host communities” (269). They examined multimodal production through “critical artifactual literacies,” which foreground how the objects and material practices in which composition occurs affect the writing process through the various “affordances” offered by different “modes”; this study focused on the mode of images (270). The study further addressed the use of images and digital modes in the genre of testimonio, “a first-person narrative told by the protagonist and witness of events, usually recorded by an interlocutor,” which features a call to action (271-72). Throughout, the authors used a “participatory approach,” in which they worked side-by-side with the women and students to consider how the writers made choices and constructed meaning from the available resources (271).

A goal of the study was to assess “how transnational ties shaped various aspects of the digital writing process for all participants” (276). The authors argue that their study’s intergenerational focus usefully complicates common views that immigrant adults maintain the “cultural heritage” of their home communities while children develop more “hybrid practices” (270). Noguerón-Liu and Hogan found that the differences between the adults and adolescents they studied were more complex than generally assumed.

Interviews and results of focus groups were coded to investigate how participants maintained transnational ties, while coding of “field notes, interviews, and writing samples” permitted examination of how visual media “elicit[ed] discussion” during the composition process (275-76).

A major distinction revealed by the study was that the adults concentrated on sharing cultural information and revisiting memories while the adolescents focused on worries about safety and violence (278, 284). “Diana” created materials depicting church activities, and “Mireya” elaborated on a mountain setting near her hometown that she wanted her daughter to see. In contrast, “Jackie” seemed caught up in the story of a bus accident that made her worry about her family’s safety, while “Diego” collected videos and references to drug cartels and police corruption in his hometown (277-78).

Another important aspect of the study was the degree to which search-engine algorithms influenced participants’ options and choices. Searches foregrounded images from news reports, which most often showed violent events from the towns. Mireya abandoned digital searching for images because she considered violence irrelevant to the values she wanted to convey (280). After this experience, Noguerón-Liu and Hogan discussed options for reducing exposure to violence in the middle-grade sessions, but were unable to find completely satisfactory filters that still gave the students the information they needed (280).

The authors found dealing with emerging images of crime and violence a challenge in their roles as mentors and co-composers. Diego drew heavily on available videos of men with guns to ground his concerns about drug-cartel power in his community, and the researchers found themselves “interject[ing] [their] own assumptions about conflict” as they facilitated the students’ efforts (281). They found themselves among the interlocutors for participants’ testimonio about their experiences, ranging from witnessing miracles to reporting violence (283). This role required the researchers to “negotiate [their] own biases and concerns about crime-related information (which aligned with the concerns of adult participants) and the urgency in adolescents’ accounts about the danger their relatives faced back home” (283).

Noguerón-Liu and Hogan stress the diversity and agency that participants displayed as a result of their varying experiences with transnational networks. The two adults made specific decisions about which images they considered relevant to their purposes, consciously avoiding depictions of violence. Noguerón-Liu and Hogan caution that the prevalence of images of violence arising from news stories accessed by search engines can obscure other features of immigrants’ home communities that the immigrants themselves wish to foreground (286). At the same time, the researchers’ experiences as interlocutors for testimonio led them to argue that “transnational practices should not be reduced to symbols or folkloric dance, but can be expanded to include the solidarity, concern, and healing connecting individuals to their home countries” (286).

The authors note that their study highlights the “limitations of digital files” in ways that should concern all practitioners of multimodal composition instruction (285). Individual images juxtaposed without context can influence interpretation. The authors point to the importance of keyword choice as a means of expanding the available material from which multimodal writers can draw (285).

Noguerón-Liu and Hogan contend that “a listening-and-learning stance in practitioner inquiry” will best support agency and choice as transnational students decide how they want to depict their homelands and their ties to them. Teachers’ “[n]ew ways of listening and seeing” will facilitate immigrants’ efforts to “reimagine Michoacán and other conflict-ridden regions in complex and hopeful ways” (287).



Litterio, Lisa M. Contract Grading: A Case Study. J of Writing Assessment, 2016. Posted 04/20/2017.

Litterio, Lisa M. “Contract Grading in a Technical Writing Classroom: A Case Study.” Journal of Writing Assessment 9.2 (2016). Web. 05 Apr. 2017.

In an online issue of the Journal of Writing Assessment, Lisa M. Litterio, who characterizes herself as “a new instructor of technical writing,” discusses her experience implementing a contract grading system in a technical writing class at a state university in the northeast. Her “exploratory study” was intended to examine student attitudes toward the contract-grading process, with a particular focus on how the method affected their understanding of “quality” in technical documents.

Litterio’s research into contract grading suggests that it can have the effect of supporting a process approach to writing as students consider the elements that contribute to an “excellent” response to an assignment. Moreover, Litterio contends, because it creates a more democratic classroom environment and empowers students to take charge of their writing, contract grading also supports critical pedagogy in the Freirean model. Litterio draws on research to support the additional claim that contract grading “mimic[s] professional practices” in that “negotiating and renegotiating a document” as students do in contracting for grades is a practice that “extends beyond the classroom into a workplace environment.”

Much of the research she reports dates to the 1970s and 1980s, often reflecting work in speech communication, but she cites as well models from Ira Shor, Jane Danielewicz and Peter Elbow, and Asao Inoue from the 2000s. In a common model, students can negotiate the quantity of work that must be done to earn a particular grade, but the instructor retains the right to assess quality and to assign the final grade. Litterio depicts her own implementation as a departure from some of these models in that she did make the final assessment, but applied criteria devised collaboratively by the students; moreover, her study differs from earlier reports of contract grading in that it focuses on the students’ attitudes toward the process.

Her Fall 2014 course, which she characterizes as a service course, enrolled twenty juniors and seniors representing seven majors. Neither Litterio nor any of the students was familiar with contract grading, and no students withdrew on learning, from the syllabus and class announcements, of Litterio’s grading intentions. At mid-semester and again at the end of the course, Litterio administered an anonymous open-ended survey to document student responses. Adopting the role of “teacher-researcher,” Litterio hoped to learn whether involvement in the generation of criteria led students to a deeper awareness of the rhetorical nature of their projects, as well as to “more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.”

Litterio shares the contract options, which allowed students to agree to produce a stated number of assignments of either “excellent,” “great,” or “good” quality, an “entirely positive grading schema” that draws on Frances Zak’s claim that positive evaluations improved student “authority over their writing.”

The criteria for each assignment were developed in class discussion through an open voting process that resulted in general, if not absolute, agreement. Litterio provides the class-generated criteria for a resumé, which included length, format, and the expectations of “specific and strong verbs.” As the instructor, Litterio ultimately decided whether these criteria were met.

Mid-semester surveys indicated that students were evenly split in their preferences for traditional grading models versus the contract-grading model being applied. At the end of the semester, 15 of the 20 students expressed a preference for traditional grading.

Litterio coded the survey responses and discovered specific areas of resistance. First, some students cited the unfamiliarity of the contract model, which made it harder for them to “track [their] own grades,” in one student’s words. Second, the students noted that the instructor’s role in applying the criteria did not differ appreciably from instructors’ traditional role as it retained the “bias and subjectivity” the students associated with a single person’s definition of terms like “strong language.” Students wrote that “[i]t doesn’t really make a difference in the end grade anyway, so it doesn’t push people to work harder,” and “it appears more like traditional grading where [the teacher] decide[s], not us.”

In addition, students resisted seeing themselves and their peers as qualified to generate valid criteria and to offer feedback on developing drafts. Students wrote of the desire for “more input from you vs. the class,” their sense that student-generated criteria were merely “cosmetics,” and their discomfort with “autonomy.” Litterio attributes this skepticism about peer expertise to students’ actual novice status as well as to the nature of the course, which required students to write for different discourse communities because of their differing majors. She suggests that contract grading may be more appropriate for writing courses within majors, in which students may be more familiar with the specific nature of writing in a particular discipline.

However, students did confirm that the process of generating criteria made them more aware of the elements involved in producing exemplary documents in the different genres. Incorporating student input into the assessment process, Litterio believes, allows instructors to be more reflective about the nature of assessment in general, including the risk of creating a “yes or no . . . dichotomy that did not allow for the discussions and subjectivity” involved in applying a criterion. Engaging students throughout the assessment process, she contends, provides them with more agency and more opportunity to understand how assessment works. Student comments reflect an appreciation of having a “voice.”

This study, Litterio contends, challenges the assumption that contract grading is necessarily “more egalitarian, positive, [and] student-centered.” The process can still strike students as biased and based entirely on the instructor’s perspective, she found. She argues that the reflection on the relationship between student and teacher roles enabled by contract grading can lead students to a deeper understanding of “collective norms and contexts of their actions as they enter into the professional world.”



McAlear and Pedretti. When is a Paper “Done”? Comp. Studies, Fall 2016. Posted 03/02/2017.

McAlear, Rob, and Mark Pedretti. “Writing Toward the End: Students’ Perceptions of Doneness in the Composition Classroom.” Composition Studies 44.2 (2016): 72-93. Web. 20 Feb. 2017.

Rob McAlear and Mark Pedretti describe a survey designed to shed light on students’ conceptions of “doneness,” or how they decide that a piece of writing is finished.

McAlear and Pedretti argue that writing teachers tend to consider writing an ongoing process that never really ends. In their view, this approach values “process over product,” with the partial result that the issue of how a writing task reaches satisfactory completion is seldom addressed in composition scholarship (72). They contend that experienced writers acquire an ability “central to compositional practice” of recognizing that a piece is ready for submission, and writing instructors can help students develop their own awareness of what makes a piece complete.

A first step in this pedagogical process, McAlear and Pedretti write, is to understand how students actually make this decision about their college assignments (73). Their article seeks to determine what criteria students actually use and how these criteria differ as student writers move through different levels of college writing (73).

McAlear and Pedretti review the limited references to doneness in composition scholarship, noting that earlier resources like Erika Lindemann and Daniel Anderson’s A Rhetoric for Writing Teachers and Janet Emig’s work suggest that the most important factors are deadlines and a sense that the writer has nothing more to say. The authors find these accounts “unsatisfying” (74). Nancy Sommers, they state, recognizes that writing tasks do end but explores neither the criteria nor the “implications for those criteria” (75). Linda Flower and John R. Hayes, in their cognitive model, suggest that endings are determined by a writer’s “task representation,” with solution of a problem the supposed end point. Again, the authors find that knowing how writers “defin[e] a problem” does not explain how writers know they “have reached an adequate solution” (75).

One reason doneness has not been explicitly addressed, McAlear and Pedretti posit, is its possible relationship to “products” as the end of writing. Yet, they argue, “one of the implicit goals of teaching writing as a process is to get better products” (76). In their view, interrogating how writers come to regard their work as finished need not commit scholars to a “Big Theory” approach; “completion,” like process, can be rhetorically focused, responsive to specific audiences and purposes (76).

The authors surveyed 59 students in four first-year and four second-year writing courses at a Midwest research institution (78). The survey consisted of ten questions; analysis focused on the first two, asking about the student’s year and major, and on two questions, Q5 and Q10, that specifically asked how students decided a piece was finished. Question 5 was intended to elicit information about “a cognitive state,” whereas Question 10 asked about specific criteria (78).

Coding answers yielded three strategies: Internal, Criteria, and Process. “Internal” responses “linked to personal, emotional, or aesthetic judgments, such as feeling satisfied with one’s work or that the paper ‘flowed’” (79). Answers classified under “Criteria” referenced “empirical judgments of completion” such as meeting the requirements of the assignment (79). In “Process” answers, “any step in the writing process . . . was explicitly mentioned,” such as proofreading or peer review (79). McAlear and Pedretti coded some responses as combinations of the basic strategies, such as IP for “Internal-Process” or PC for “Process-Criteria” (80).

Survey responses indicated that first-year students tended to use a single strategy to determine doneness, with Internal or Process dominant. Nearly half of second-year students also used only one marker, but with a shift from Internal to Criteria strategies (79-80). Students responding to Question 10 claimed to use more than one strategy, perhaps because an intervening question triggered more reflection on their strategies (80). However, the authors were surprised that 33% of first-year students and 48% of second-year students did not mention Process strategies at all (80). Overall, first-year writers were more likely to report Internal or Process options, while second-year writers trended more to external Criteria (80-81).

McAlear and Pedretti found that for first-year students particularly, “Process” involved only “lower-order” strategies like proofreading (81). The authors recoded references to proofreading or correctness into a new category, “Surface.” With this revision, first-year students’ preference for Internal strategies “become even more prominent,” while second-year students’ use of Process strategies other than “Surface” was highlighted (82).

Study results do not support what McAlear and Pedretti consider a common perception that correctness and page length dictate students’ decisions about doneness (84). The authors posit that “students may be relying on equally simple, but qualitatively distinct, criteria” (84). First-year students commonly pointed to “proofreading and having nothing more to say,” while second-year students expressed concern with “meeting the criteria of the prompt” (84).

McAlear and Pedretti note that even among second-year students who had been exposed to more than one writing class, these responses indicate very little “awareness of rhetorical situation” (84). Although responding to the rhetorical situation of a college classroom, McAlear and Pedretti argue, second-year students interpret the actual expectations of a writing class simplistically (85). Considerations that writing teachers would hope for, like “Is this portion of my argument persuasive for my audience,” were completely missing (84). Moreover, many second-year students did not note Process at all, despite presumably having encountered the concept often (85).

McAlear and Pedretti propose that the shift away from Internal, affective markers to external, criteria-focused, albeit reductive, strategies may reflect a “loss of confidence” as students encountering unfamiliar discourses no longer trust their ability to judge their own success (85-86). The authors suggest that, because students cannot easily frame a rhetorical problem, “they do not know their endpoint” and thus turn to teachers for explicit instruction on what constitutes an adequate response (87).

For the authors, the moment when students move to external criteria and must articulate these criteria is an opportunity to introduce a vocabulary on doneness and to encourage attention to the different kinds of criteria suitable for different rhetorical contexts (88). Instructors can use reflective activities and examination of others’ decisions as revealed in their work to incorporate issues of doneness into rhetorical education as they explicitly provide a range of strategies, from internal satisfaction to genre-based criteria (88-89). Students might revise writing tasks for different genres and consider how, for example, completion criteria for an essay differ from those for a speech (90).

The authors propose that such attention to the question of doneness may shed light on problems like “writing anxiety, procrastination, and even plagiarism” (84). Ultimately, they write, “knowing when to stop writing is a need that many of our students have, and one for which we have not yet adequately prepared them” (90).



Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability. “High” authors were categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High-ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” reviewers made more higher-order comments even though they did not provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or number of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be limited by “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn suggest that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, the finding that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).

Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as more time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that the “IntelliMetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access! provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor, by contrast, responds to the specific errors in each individual text.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which nonetheless allowed access to “rich data” and “detailed description” of actual use (167). They note also that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts also may have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because AEE scores prompt revision, such programs can effectively augment writing instruction, but they recommend that scoring track student development so that, as students near the maximum score at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).