College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Shi, Matos, and Kuhn. Dialogue and Argument. JoWR, Spring 2019. Posted 06/15/2019.

Shi, Yuchen, Flora Matos, and Deanna Kuhn. “Dialog as a Bridge to Argumentative Writing.” Journal of Writing Research 11.1 (2019): 107-29. Web. 5 June 2019.

Yuchen Shi, Flora Matos, and Deanna Kuhn report on a study of a dialogic approach to argumentative writing conducted with sixth-graders at “an urban public middle school in an underserved neighborhood in a large Northeastern city in the United States” (113). The study replicates earlier research on the same curriculum, with added components to assess whether the intervention increased “meta-level understanding of the purpose and goals of evidence in argumentative writing” (112-13).

Noting that research has documented the degree to which students struggle with the cognitive demands of argumentative writing as opposed to narration (108), the authors report that while the value of discourse as a precursor to writing an argument has been recognized, much of the discourse studied has been at the “whole-classroom level” (108). In contrast, the authors’ intervention paired students so that they could talk “directly” both with peers who shared their positions and with peers who opposed them (108).

In the authors’ view, this process provided students with two elements that affect the success of written communication: “a clearly defined audience and a meaningful purpose” (108). They argue that this direct engagement with the topic and with an audience over a period of time improves on reading about a topic, which they feel students may do “disinterestedly” because they do not yet have a sense of what kind of evidence they may need (110). The authors’ dialogic intervention allows students to develop their own questions as they become aware of the arguments they will have to make (110).

Further, the authors maintain, the dialogic exchange linking individual students “removes the teacher” and makes the process student-centered (109).

Claiming that the ability to produce “evidence-based claims” is central to argument, the authors centered their study on the relation between claims and evidence in students’ discussions and in their subsequent writing (110). Their model, they write, allowed them to see a developmental sequence: students were at first most likely to choose evidence that supported their own position, only later beginning to employ evidence that “weaken[s] the opposing claim” (111). Even more sophisticated approaches to evidence, which the authors label “weaken-own” and “support-other,” develop more slowly, if at all (111-12).

Two classes were chosen to participate, one as the experimental group (22 students) and one as a comparison group (27 students). The curriculum was implemented in “twice-weekly 40-minute class sessions” that continued in “four cycles” throughout the school year (114). Each cycle introduced a new topic; the four topics were selected from a list because students seemed equally divided in their views on those issues (114).

The authors divided their process into Pregame, Game, and Endgame sections. In the Pregame, students in small groups generated reasons in support of their position. In the Game, student pairs sharing a position dialogued electronically with “a different opposing pair at each session” (115). During this section, students generated their own “evidence questions” which the researchers answered by the next session; the pairs were given other evidence in Q&A format. The Endgame consisted of a debate, which was then scored and a winning side designated (115). Throughout, students constructed reflection pieces; electronic transcripts preserved the interactions (115).

At the end of each cycle, students wrote individual papers. The comparison group also wrote an essay on the fourth topic, whether students should go directly to college from high school or work for a year. For this essay, students in both groups were provided with evidence only at the end of the cycle. This essay was used for the final assessment (116-17).

Other elements assessed included whether students could recall answers to 12 evidence questions, in order to determine whether differences in the use of evidence between the two groups were a function of superior memory of the material (123). A second component was a fifth essay written by the experimental group on whether teens accused of serious crimes should be tried as adults or juveniles (118). The authors wanted to assess whether the understanding of claims and evidence cultivated during the curriculum informed writing on a topic that had not been addressed through the dialogic intervention (118).

For the assessment, the researchers considered “a claim together with any reason and/or evidence supporting it” as an “idea unit” (118). These units were subcategorized as “either evidence-based or non-evidence-based.” Analyzing only the claims that contained evidence, the researchers further distinguished between “functional” and “non-functional” evidence-based claims. Functional claims were those in which there was a clear written link between the evidence and the claim. Only the use of functional claims was assessed (118).
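As summarized here, the coding scheme reduces to a short decision procedure. The sketch below renders that logic in Python for illustration only; the boolean inputs are hypothetical stand-ins for judgments the study’s human coders made, not part of its instruments.

    # Schematic only: the study's coding was performed by human raters.
    def code_idea_unit(has_evidence: bool, evidence_linked_to_claim: bool) -> str:
        """Classify an 'idea unit' (a claim plus any supporting reason/evidence)."""
        if not has_evidence:
            return "non-evidence-based"
        # Among evidence-based claims, only those with a clear written link
        # between evidence and claim counted as functional:
        return "functional" if evidence_linked_to_claim else "non-functional"

    assert code_idea_unit(True, False) == "non-functional"
    assert code_idea_unit(True, True) == "functional"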

Results indicated that while the number of idea units and evidence-based claims did not vary significantly across the groups, the experimental group was significantly more successful in including functional evidence-based claims (120). Also, the intervention encouraged significantly more use of “weaken-other” claims, which the writers characterize as “a more demanding skill commonly neglected by novice writers” (120). Students did not show progress in using “weaken-own” or “support-other” evidence (121).

In assessing the intervention’s effects on students’ meta-level awareness of evidence in arguing, the researchers found that the groups did not vary in the kinds of evidence they would most like to see, with both choosing “support-own.” However, the experimental group was much more likely to state that “weaken-other” evidence was the type “they would like to see second most” (122). The groups were similar in their ability to recall evidence, which in the authors’ view indicates that superior recall in one group or the other did not explain the results (125).

Assessment of the essay on the unfamiliar topic was hampered by an even smaller sample size and the fact that the two groups wrote on different topics. The writers report that 54% of the experimental-group students made support-own or weaken-other claims, but that the number of such claims decreased to a frequency similar to that of the comparison group on the college/work topic (124).

The authors argue that increased use of more sophisticated weaken-other evidence points to higher meta-awareness of evidence as a component of argument, but that students could show more growth as measured by their ability to predict the kind of evidence they would need or use (125).

Noting the small sample size as a limitation, the authors suggest that both the dialogic exchange of their curriculum and the students’ “deep engagement” with topics contributed to the results they recorded. They suggest that “[a]rguing to learn” through dialogue and engagement can be an important pedagogical activity because of the discourse and cognitive skills these activities develop (126).



Vetter, Matthew A. Editing Wikipedia as Pedagogy for Cultural Critique. CE, May 2018. Posted 05/22/2018.

Vetter, Matthew A. “Teaching Wikipedia: Appalachian Rhetoric and the Encyclopedic Politics of Representation.” College English 80.5 (2018): 397-422. Print.

Matthew A. Vetter writes about a study in a junior-level rhetoric and writing course in which he used Wikipedia as a focus for the course and as a primary teaching tool (399). He argues that designing a curriculum in which students actively participate in Wikipedia editing can serve dual goals of meeting general education and composition learning outcomes while also introducing students to cultural critique (400).

The course, which took place in a university in a region of Ohio that is considered part of Appalachia, used depictions of Appalachia in media and in Wikipedia to introduce issues of cultural representation while also allowing students to gain from the particular affordances Wikipedia offers (399).

Vetter notes that while Wikipedia is often excoriated by college and university instructors, scholarship in composition has credited the project with important qualities useful for teaching writing (397, 402). Scholars claim that Wikipedia provides an “authentic” writing environment that engages students with real, potentially responsive audiences in the collaborative construction of knowledge (397). Students working in this environment can “deconstruct authority in public and ‘published’ texts” and can gain firsthand experience in the process of editing and revision (397).

Vetter recounts as well critiques that challenge Wikipedia’s claim to provide “universal access and representation” (398). He cites statistics indicating that the “editorship” is “overwhelmingly male and homogenous” (398). Further, the site marginalizes certain geographic and cultural locations and issues through lack of representation and often through representation from an “outsider perspective” (398).

For Vetter, this disparity in representation affects the ways Wikipedia addresses marginalized areas of Western culture, such as Appalachia. Involving students with Wikipedia’s depiction of Appalachia, in Vetter’s view, gives them access to the ways that representation functions through media and rhetoric and allows them to see their ability to intervene through writing as a potential force for change (399).

Vetter found that a significant minority of his students considered themselves connected to Appalachia (407); 17 students participated in the study (401). The course design allowed all students to engage both with the issue of representation of Appalachia in media and with the rhetorical nature and “cultural politics” of Wikipedia as a source of information (416), with implications for how rhetoric and writing construct realities.

Students began by examining depictions of Appalachia in mainstream media, moved on to group genre analysis of Wikipedia articles, and finally chose Wikipedia pieces on Appalachia to edit, drawing on their research as well as their personal experiences as residents of an Appalachian region (400). Students also wrote two in-class “process logs,” one asking them to reflect on what they had learned about rhetorical treatment of Appalachia and one calling for consideration of how their engagement with Wikipedia had changed as a result of the course (401). Coding of the process logs allowed Vetter to detect themes shared across many responses.

Vetter explores scholarship on teaching with Wikipedia within composition studies, finding an interest in the ways using Wikipedia as a site for writing can enable a shift from consumption to production (403). He argues that Wikipedia is an example of a “[c]ommunity-based pedagog[y]” that, by offering “exposure to multiple authorities and audiences,” contributes to students’ rhetorical knowledge (403). In Vetter’s view, scholarship has tended to focus on the contribution to general learning outcomes enabled by Wikipedia-based assignments; he contends that this focus “should be expanded” to exploit what the site can teach about the rhetorical nature of representation and about the processes that result in the marginalization of “cultures and identities” (404).

The first class project, examining representations of Appalachia in mainstream sources, asked students to examine Appalachia as a “social invention” created through writing (404). This “symbolic construction” (404) of the region, Vetter argues, shifts attention from the “material realities” experienced by inhabitants (405). Study of these material realities, Vetter contends, can lead to more nuanced awareness of the diversity of the region and to a greater appreciation of the range of literacies that characterize individuals (405-06). Vetter’s course and study move beyond the “denaturalization” that such scholarship begins, encouraging a “method of critical praxis that contributes to the reshaping of cultural narratives” as students not only study how stereotypes are created and persist but also resist those stereotypes by actively editing Wikipedia’s Appalachia articles (406).

Analysis of the first process log revealed that students recognized the effects of problematic representation of Appalachia; 88% also noted “the social-epistemic functions of rhetoric and writing” (408, 409). Their study of media depictions of the region also emphasized for students how reliance on outsiders for representation erased the realities experienced by people closer to the region (411).

Vetter notes that contributors to Wikipedia are aware that work remains to be done to improve the depiction of Appalachia. Wikiprojects, “dedicated task forces” that strive to improve Wikipedia, list “more than 40 articles in need of development or major reorganization” within Wikiproject Appalachia (412). Students were able to draw on these articles and on resources and support provided in the Wikiproject’s “talk” page to meet the course requirements (412-13). Vetter discusses the need to move beyond word counts in assessing student work, because Wikipedia encourages concision and because students must collaborate with other editors to have their work included (413).
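Vetter does not describe any particular tooling, but the revision metadata relevant to such assessment is publicly queryable. As a minimal sketch, assuming the public MediaWiki API (the article title and parameter choices here are illustrative, not drawn from the study), an instructor could pull recent revision records to see whose edits persisted and how an article changed:

    # Illustrative sketch: fetch recent revision metadata for a Wikipedia
    # article via the public MediaWiki API; the title is a placeholder.
    import requests

    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "titles": "Appalachia",
            "rvlimit": 20,
            "rvprop": "user|timestamp|size|comment",
            "format": "json",
            "formatversion": 2,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for rev in resp.json()["query"]["pages"][0]["revisions"]:
        # Each record shows who edited, when, and the resulting article size,
        # supporting assessment that goes beyond raw word counts.
        print(rev["timestamp"], rev["user"], rev["size"])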

The second process log suggested that genre analysis and exposure to Wikipedia itself had given students better understanding and familiarity with the exigencies of working in the site. Some students wrote that professors in earlier classes who had imposed “outright bans” on the use of Wikipedia for research failed to understand how a critical understanding of the site could make it a productive research source (415-16). Vetter contends that a more nuanced understanding of Wikipedia and a well-structured curriculum using the site could allow academics to encourage the kinds of improvements they believe Wikipedia needs, including an increase in the diversity of contributors (416).

Three of the 17 students reported difficulty getting their edits accepted, finding that experienced editors served as gatekeepers on “popular” topics while more marginalized topics were hard to research because of a lack of well-documented information. Vetter contends that Wikipedia’s insistence on “published and verifiable sources” will always tend to exclude the important insights that come from the direct experience of those familiar with a region or topic (419). While the “distributed model” of “Commons-Based Peer Production” in place at Wikipedia does allow many users to “come together to collaboratively and incrementally build a global knowledge source,” this model simultaneously “deemphasize[s] and devalue[s] the place of local knowledge production” (419).

In Vetter’s view, student engagement with Wikipedia can alert them to the ways that various types of representation can misinform while empowering them to recognize their own writing and rhetoric as interventions for change.

 



Hayden, Wendy. Archival Research as Teaching Methodology. CE, Nov. 2017. Posted 01/11/2018.

Hayden, Wendy. “AND GLADLY TEACH: The Archival Turn’s Pedagogical Turn.” College English 80.2 (2017): 133-54. Print.

Wendy Hayden proposes archival research as a pedagogical method to help undergraduates develop a nuanced understanding of academic research. She writes in response to accounts of student research from both students and faculty that depict the usual research process as one of collecting information from sources and reproducing it with attention to mechanics of documentation and organization but with little input or engagement from the student writer (133). Hayden cites scholarship advocating assignments that foreground primary research as a way to address this problem. In her view, archival research is an important form of such primary research (134).

Hayden anchors her discussion in a course she taught for upper-level majors in English, education, and political science. The specific topic of the course was “the archival turn in rhetoric and composition studies” (140). Hayden discusses the challenges of covering all aspects of archival research in a single semester, arguing that even including such research in a single unit provides many benefits. In her own case, she was able to supply an “immersion” experience by focusing on archives throughout a semester (140). She reports that she decided to “survey the field’s archival turn and then throw everything I could into the course to see what happened” (141). Students explored both physical and digital archives, met with guest speakers, visited repositories, and created final projects that followed up on some aspect of their research experiences (141).

According to Hayden, a major benefit of archival research is that it casts education as an “inquiry-based” activity (135). This inquiry, she contends, allows students to enhance their close-reading skills and to develop projects that move beyond “rehash[ing] existing scholarship” (135). Archivists and faculty incorporating this methodology report “increased student engagement” as students find themselves able to contribute to knowledge in a field (135).

Hayden stresses that archival pedagogies inculcate feminist values of collaboration, cooperation, and invitation (135-36) as well as activism (140). Citing a number of practitioners who have published about archival methodologies in the classroom and including many examples of assignments, Hayden proposes three components of this research: recovery, rereading, and creation of new archives (136).

Students exploring archival material to recover forgotten voices and missing histories can be encouraged to see research as an “ongoing endeavor rather than a set number of citations” (Tom Keegan and Kelly McElroy, qtd. in Hayden 136). Hayden argues that experiences in digital archives foreground the collaborative nature of such research, especially when students can annotate or contribute to the materials (137). Digital archives, which can be defined either narrowly or broadly, can be connected to local issues that enhance student engagement (137). Recovery assignments include opportunities for students to share their findings with larger publics, building their confidence in the value of their own voices (137).

“(Re)reading the archive” (138) encourages student attention to the constructed, partial nature of the materials as they begin to question why some things are included and others left out (138). Hayden writes that such questioning leads to an understanding of “public memory as a process” that, in the words of Jane Greer and Laurie Grobman, reveals “the fluidity of our shared memories” (qtd. in Hayden 138). According to Hayden, this understanding of the rhetoricity of archives inspires what Jessica Enoch and Pamela VanHaitsma call an “archival literacy” (138) that points to the archivist’s responsibility in assembling the components of memory (140).

Creating their own archives, as in the assignments Hayden reports, further emphasizes for students the complex decisions and ethical challenges of joining an archival conversation (139). Students’ agency in collecting and organizing materials of interest to them permits increased connections between history and the students’ own lives while also providing opportunities for the feminist value of activism (140). Hayden cites Tarez Graban and Shirley K. Rose to propose a “networked archive” in which the feminist practices of collaboration and invitation are paramount (140).

Discussing her own class, Hayden finds that “the central question and focus that emerged . . . was the nature of academic study as a personalized inquiry and how undergraduate scholars are central to that inquiry” (141). She recounts extensive collaboration with a librarian, with guest speakers, with archivists throughout the city, and even with authors of texts on archival research (141-42). In the process, all participants, including the students, cooperated as “agents” in exploring, documenting, and building archives (142).

Hayden’s students read archives she made available, pursued questions of individual interest that arose from this exploration, and completed a final project of their choosing, reflecting on each step in blog posts that themselves became a class archive (143). Hayden found that students were more comfortable in physical archives than digital ones, which students reported finding “overwhelming” (142). The author notes students’ discovery that acquiring information was less challenging than selecting and organizing the voluminous material available (142).

Throughout her discussion, Hayden provides examples of student projects, many of which, she argues, deepened students’ awareness of the rhetorical and activist nature of archives and the work involved in exploring and creating them. One student, for example, collected voices of women who had returned to college and “advocat[ed] for resources based on what these women need to succeed” (145).

Hayden writes that the uncertainty inherent in archival research encouraged students to be open to “shifts” in the direction of their discoveries as they found some searches to be “dead-end question[s]” (145). These experiences often led students to see their course research as a component of a larger, ongoing project, shifting the purpose of research from a finished product to a process and thereby permitting them to take more risks (146). In turn, this experience, in Hayden’s view, engaged students in a more authentic scholarly conversation than that often depicted in textbooks, which might rely on sources like newspaper op-eds rather than actual academic exchanges (147).

An additional value Hayden cites is the way that archival research defines scholarly research as “about people” (147). Thinking about their obligations to their subjects personalized the process for students; among the results was an increased tendency to develop their own ideas and values through their work, as well as to accord more interest and respect to the contributions of peers (148). Students became excited about publishing their work, in the process moving beyond the “more traditional scholarly paper” (148).

Hayden closes with the voice of student Julie Sorokurs, who writes, “I marveled at how easily and effectively an academic pursuit could become a project of love and genuine curiosity” (qtd. in Hayden 149).



Stewart, Mary K. Communities of Inquiry in Technology-Mediated Activities. C&C, Sept. 2017. Posted 10/20/2017.

Stewart, Mary K. “Communities of Inquiry: A Heuristic for Designing and Assessing Interactive Learning Activities in Technology-Mediated FYC.” Computers and Composition 45 (2017): 67-84. Web. 13 Oct. 2017.

Mary K. Stewart presents a case study of a student working with peers in an online writing class to illustrate the use of the Community of Inquiry framework (CoI) in designing effective activities for interactive learning.

Stewart notes that writing-studies scholars have both praised and questioned the promise of computer-mediated learning (67-68). She cites scholarship contending that effective learning can take place in many different environments, including online environments (68). This scholarship distinguishes between “media-rich” and “media-lean” contexts. Media-rich environments include face-to-face encounters and video chats, where exchanges are immediate and are likely to include “divergent” ideas, whereas media-lean situations, like asynchronous discussion forums and email, encourage more “reflection and in-depth thinking” (68). The goal of an activity can determine which is the better choice.

Examining a student’s experiences in three different online environments with different degrees of media-richness leads Stewart to argue that it is not the environment or particular tool that determines the success or failure of an activity as a learning experience. Rather, in her view, the salient factor is “activity design” (68). She maintains that the CoI framework provides “clear steps” that instructors can follow in planning effective activities (71).

Stewart defined her object of study as “interactive learning” (69) and used a “grounded theory” methodology to analyze data in a larger study of several different course types. Interviews of instructors and students, observations, and textual analysis led to a “core category” of “outcomes of interaction” (71). “Effective” activities led students to report “constructing new knowledge as a result of interacting with peers” (72). Her coding led her to identify “instructor participation” and “rapport” as central to successful outcomes; reviewing scholarship after establishing her own grounded theory, Stewart found that the CoI framework “mapped to [her] findings” (71-72).

She reports that the framework involves three components: social presence, teaching presence, and cognitive presence. Students develop social presence as they begin to “feel real to one another” (69). Stewart distinguishes between social presence “in support of student satisfaction,” which occurs when students “feel comfortable” and “enjoy working” together, and social presence “in support of student learning,” which follows when students actually value the different perspectives a group experience offers (76).

Teaching presence refers to the structure or design that is meant to facilitate learning. In an effective CoI activity, social and teaching presence are required to support cognitive presence, which is indicated by “knowledge construction,” specifically “knowledge that they would not have been able to construct without interacting with peers” (70).

For this article, Stewart focused on the experiences of a bilingual Environmental Studies major, Nirmala, in an asynchronous discussion forum (ADF), a co-authored Google document, and a synchronous video webinar (72). She argues that Nirmala’s experiences reflect those of other students in the larger study (72).

For the ADF, students were asked to respond to one of three questions on intellectual property, then to respond to two other students who had addressed the other questions. The prompt specifically called for raising new questions or offering different perspectives (72). Both Nirmala and Stewart judged the activity effective even though it occurred in a media-lean environment because, in sharing varied perspectives on a topic that did not have a single solution, students produced material that they were then able to integrate into the assigned paper (73):

The process of reading and responding to forum posts prompted critical thinking about the topic, and Nirmala built upon and extended the ideas expressed in the forum in her essay. . . . [She] engaged in knowledge construction as a result of interacting with her peers, which is to say she engaged in “interactive learning” or a “successful community of inquiry.” (73)

Stewart notes that this successful activity did not involve the “back-and-forth conversation” instructors often hope to encourage (74).

The co-authored paper was deemed not successful. Stewart contends that the presence of more immediate interaction did not result in more social presence and did not support cognitive presence (74). The instructions required two students to “work together” on the paper; according to Nirmala’s report, co-authoring became a matter of combining and editing what the students had written independently (75). Stewart writes that the prompt did not establish the need for exploration of viewpoints before the writing activity (76). As a result, Nirmala felt she could complete the assignment without input from her peer (76).

Though Nirmala suggested that the assignment might have worked better had she and her partner met face-to-face, Stewart argues from the findings that the more media-rich environment in which the students were “co-present” did not increase social presence (75). She states that instructors may tend to think that simply being together will encourage students to interact successfully when what is actually needed is more attention to the activity design. Such design, she contends, must specifically clarify why sharing perspectives is valuable and must require such exploration and reflection in the instructions (76).

Similarly, the synchronous video webinar failed to create productive social or cognitive presence. Students placed in groups and instructed to compose group responses to four questions again responded individually, merely “check[ing]” each other’s answers. Nirmala reports that the students actually “Googled the answer and, like, copy pasted” (Nirmala, qtd. in Stewart 77). Stewart contends that the students concentrated on answering the questions, skipping discussion and sharing of viewpoints (77).

For Stewart, these results suggest that instructors should be aware that in technology-mediated environments, students take longer to become comfortable with each other, so activity design should build in opportunities for the students to form relationships (78). Also, prompts can encourage students to share personal experiences in the process of contributing individual perspectives. Specifically, according to Stewart, activities should introduce students to issues without easy solutions and focus on why sharing perspectives on such issues is important (78).

Stewart reiterates her claim that the particular technological environment or tool in use is less important than the design of activities that support social presence for learning. Even in media-rich environments, students placed together may not effectively interact unless given guidance in how to do so. Stewart finds the CoI framework useful because it guides instructors in creating activities, for example, by determining the “cognitive goals” in order to decide how best to use teaching presence to build appropriate social presence. The framework can also function as an assessment tool to document the outcomes of activities (79). She provides a step-by-step example of CoI in use to design an activity in an ADF (79-81).

 



Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR, 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such binary determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
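A median split of this kind is simple to express. The sketch below illustrates the general technique on fabricated composite scores; the measure and the values are assumptions for illustration, not the study’s data.

    # Minimal sketch of a median split on hypothetical ability scores.
    import statistics

    def median_split(scores: dict[str, float]) -> dict[str, str]:
        """Label each participant 'high' or 'low' relative to the group median."""
        median = statistics.median(scores.values())
        return {pid: ("high" if s >= median else "low") for pid, s in scores.items()}

    # Fabricated scores for five participants:
    print(median_split({"p1": 610, "p2": 540, "p3": 480, "p4": 700, "p5": 525}))
    # {'p1': 'high', 'p2': 'high', 'p3': 'low', 'p4': 'high', 'p5': 'low'}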

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or the number of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be limited by “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn propose that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



West-Puckett, Stephanie. Digital Badging as Participatory Assessment. CE, Nov. 2016. Posted 11/17/2016.

Stephanie West-Puckett presents a case study of the use of “digital badges” to create a local, contextualized, and participatory assessment process that works toward social justice in the writing classroom.

She notes that digital badges are graphic versions of those earned by scouts or worn by members of military groups to signal “achievement, experience, or affiliation in particular communities” (130). Her project, begun in Fall 2014, grew out of Mozilla’s free Open Badging Initiative and out of a Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC) initiative that funded grants to four universities as well as to museums, libraries, and community partnerships to develop badging as a way of recognizing learning (131).

West-Puckett employed badges as a way of encouraging and assessing student engagement in the outcomes and habits of mind included in such documents as the Framework for Success in Postsecondary Writing, the Outcomes Statements for First-Year Composition produced by the Council of Writing Program Administrators, and her own institution’s outcomes statement (137). Her primary goal is to foster a “participatory” process that foregrounds the agency of teachers and students and recognizes the ways in which assessment can influence classroom practice. She argues that such participation in designing and interpreting assessments can address the degree to which assessment can drive bias and limit access and agency for specific groups of learners (129).

She reviews composition scholarship characterizing most assessments as “top-down” (127-28). In these practices, West-Puckett argues, instruments such as rubrics become “fetishized,” with the result that they are forced upon contexts to which they are not relevant, thus constraining the kinds of assignments and outcomes teachers can promote (134). Moreover, assessments often fail to encourage students to explore a range of literacies and do not acknowledge learners’ achievements within those literacies (130). More valid, for West-Puckett, are “hyperlocal” assessments designed to help teachers understand how students are responding to specific learning opportunities (134). Allowing students to join in designing and implementing assessments makes the learning goals visible and shared while limiting the power of assessment tools to marginalize particular literacies and populations (128).

West-Puckett contends that the multimodal focus in writing instruction exacerbates the need for new modes of assessment. She argues that digital badges partake of “the primacy of visual modes of communication,” especially for populations “whose bodies were not invited into the inner sanctum of a numerical and linguistic academy” (132). Her use of badges contributes to a form of assessment that is designed not to deride writing that does not meet the “ideal text” of an authority but rather to enlist students’ interests and values in “a dialogic engagement about what matters in writing” (133).

West-Puckett argues for pairing digital badging with “critical validity inquiry,” in which the impact of an assessment process is examined through a range of theoretical frames, such as feminism, Marxism, or queer or disability theory (134). This inquiry reveals assessment’s role in sustaining or potentially disrupting entrenched views of what constitutes acceptable writing by examining how such views confer power on particular practices (134-35).

In West-Puckett’s classroom in a “mid-size, rural university in the south” with a high percentage of students of color and first-generation college students (135), small groups of students chose outcomes from the various outcomes statements, developed “visual symbols” for the badges, created a description of the components and value of the outcomes for writing, and detailed the “evidence” that applicants could present from a range of literacy practices to earn the badges (137). West-Puckett hoped that this process would decrease the “disconnect” between her understanding of the outcomes and that of students (136), as well as engage students in a process that takes into account the “lived consequences of assessment” (141): its disparate impact on specific groups.

The case study examines several examples of badges, such as one using a compass to represent “rhetorical knowledge” (138). The group generated multimodal presentations, and applicants could present evidence in a range of forms, including work done outside of the classroom (138-39). The students in the group decided whether or not to award the badge.
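West-Puckett does not reproduce the badges’ underlying metadata, but because the project grew out of Mozilla’s Open Badging Initiative, a badge like this one could in principle be expressed as an Open Badges-style record. The sketch below is a hypothetical rendering for the “rhetorical knowledge” badge; the field names follow my reading of the Open Badges BadgeClass format, and every value, including the URLs, is an invented placeholder rather than the course’s actual artifact.

    # Hypothetical Open Badges-style BadgeClass record; all values invented.
    badge_class = {
        "name": "Rhetorical Knowledge",
        "description": "Awarded for demonstrated attention to audience, "
                       "purpose, and context across a range of literacy practices.",
        "image": "https://example.edu/badges/rhetorical-knowledge.png",  # the compass graphic
        "criteria": "https://example.edu/badges/rhetorical-knowledge/criteria",
        "issuer": "https://example.edu/badges/issuer.json",
    }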

West-Puckett details the degree to which the process invited “lively discussion” by examining the “Editing MVP” badge (139). Students defined editing as proofreading and correcting one’s own paper but visually depicted two people working together. The group refused the badge to a student of color because of grammatical errors but awarded it to another student who argued for the value of using non-standard dialogue to show people “‘speaking real’ to each other” (qtd. in West-Puckett 140). West-Puckett recounts the classroom discussion of whether editing could be a collaborative effort and when and in what contexts correctness matters (140).

In Fall 2015, West-Puckett implemented “Digital Badging 2.0” in response to her concerns about “the limited construct of good writing some students clung to” as well as how to develop “badging economies that asserted [her] own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers” (142). She created two kinds of badging activities, one carried out by students as before, the other for her own assessment purposes. Students had to earn all the student-generated badges in order to pass, and a given number of West-Puckett’s “Project Badges” to earn particular grades (143). She states that she privileges “engagement as opposed to competency or mastery” (143). She maintains that this dual process, in which her decision-making process is shared with the students who are simultaneously grappling with the concepts, invites dialogue while allowing her to consider a wide range of rhetorical contexts and literacy practices over time (144).

West-Puckett reports that although she found evidence that the badging component did provide students an opportunity to take more control of their learning, as a whole the classes did not “enjoy” badging (145). They expressed concern about the extra work, the lack of traditional grades, and the responsibility involved in meeting the project’s demands (145). However, in disaggregated responses, students of color and lower-income students viewed the badge component favorably (145). According to West-Puckett, other scholars have similarly found that students in these groups value “alternative assessment models” (146).

West-Puckett lays out seven principles that she believes should guide participatory assessment, foregrounding the importance of making the processes “open and accessible to learners” in ways that “allow learners to accept or refuse particular identities that are constructed through the assessment” (147). In addition, “[a]ssessment artifacts,” in this case badges, should be “portable” so that students can use them beyond the classroom to demonstrate learning (148). She presents badges as an assessment tool that can embody these principles.



Zuidema and Fredricksen. Preservice Teachers’ Use of Resources. RTE, Aug. 2016. Posted 09/25/2016.

Zuidema, Leah A., and James E. Fredricksen. “Resources Preservice Teachers Use to Think about Student Writing.” Research in the Teaching of English 51.1 (2016): 12-36. Print.

Leah A. Zuidema and James E. Fredricksen document the resources used by students in teacher-preparation programs. The study examined transcripts collected from VoiceThread discussions among 34 preservice teachers (PSTs) (16). The PSTs reviewed and discussed papers provided by eighth- and ninth-grade students in Idaho and Indiana (18).

Zuidema and Fredricksen define “resource” as “an aid or source of evidence used to help support claims; an available supply that can be drawn upon when needed” (15). They intend their study to move beyond determining what writing teachers “get taught” to discovering what kinds of resources PSTs actually use in developing their theories and practices for K-12 writing classrooms (13-14).

The literature review suggests that the wide range of concepts and practices presented in teacher-preparation programs varies depending on local conditions and is often augmented by students’ own educational experiences (14). The authors find very little systematic study of how beginning teachers actually draw on the methods and concepts their training provides (13).

Zuidema and Fredricksen see their study as building on prior research by systematically identifying the resources teachers use and assigning them to broad categories to allow a more comprehensive understanding of how teachers use such sources to negotiate the complexities of teaching writing (15-16).

To gather data, the researchers developed a “community of practice” by building their methods courses around a collaborative project focusing on assessing writing across two different teacher-preparation programs (16-17). Twenty-six PSTs from Boise State University and eight from Dordt, a small Christian college, received monthly sets of papers from the eighth and ninth graders, which they then assessed individually and with others at their own institutions.

The PSTs then worked in groups through VoiceThread to respond to the papers in three “rounds,” first “categoriz[ing]” the papers according to strengths and weaknesses; then categorizing and prioritizing the criteria they relied on; and finally “suggest[ing] a pedagogical plan of action” (19). This protocol did not explicitly ask PSTs to name the resources they used but revealed these resources via the transcriptions (19).

The methods courses taught by Zuidema and Fredricksen included “conceptual tools” such as “guiding frameworks, principles, and heuristics,” as well as “practical tools” like “journal writing and writer’s workshop” (14). PSTs read professional sources and participated in activities that emphasized the value of sharing writing with students (17). Zuidema and Fredricksen contend that a community of practice in which professionals explain their reasoning as they assess student writing encourages PSTs to “think carefully about theory-practice connections” (18).

In coding the VoiceThread conversations, the researchers focused on “rhetorical approaches to composition” (19), characterized as attention to “arguments and claims . . . , evidence and warrants,” and “sources of support” (20). They found five categories of resources PSTs used to support claims about student writing:

  • Understanding of students and student writing (9% of instances)
  • Knowledge of the context (10%)
  • Colleagues (11%)
  • PSTs’ roles as writers, readers, and teachers (17%)
  • PSTs’ ideas and observations about writing (54%) (21)

In each case, Zuidema and Fredricksen developed subcategories. For example, “Understanding of students and student writing” included “Experience as a student writer” and “Imagining students and abilities,” while “Colleagues” consisted of “Small-group colleagues,” “More experienced teachers,” “Class discussion/activity,” and “Professional reading” (23).
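The percentages reported above are simple proportions over the coded instances. The toy tally below shows the arithmetic; the counts are fabricated stand-ins chosen to roughly echo the reported proportions, and only the category names come from the study.

    # Toy tally: category percentages as proportions of coded instances.
    from collections import Counter

    coded_instances = (
        ["understanding of students"] * 2
        + ["knowledge of context"] * 2
        + ["colleagues"] * 2
        + ["roles as writers, readers, teachers"] * 3
        + ["ideas and observations about writing"] * 11
    )  # 20 fabricated instances

    counts = Counter(coded_instances)
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category}: {n}/{total} = {n / total:.0%}")
    # ideas and observations about writing: 11/20 = 55%
    # roles as writers, readers, teachers: 3/20 = 15%
    # ...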

Category 1, “Understanding of students and student writing,” was used “least often,” with PSTs referring to their own student-writing experiences only six times out of 435 recorded instances (24). The researchers suggest that this category might have been used more had the PSTs been able to interact with the students (24). They see “imagining” how students react to assignments as a “way [teachers] can develop empathy” and cultivate interest in how students understand writing (24).

Category 2, “Knowledge of Context as a Resource,” was also seldom used. Those who did refer to it tended to note issues involving what Zuidema and Fredricksen call GAPS: rhetorical awareness of “genre, audience, purpose, and situation of the writing” (25). Other PSTs noted the role of the prompt in inviting strong writing. The researchers believe these types of awarenesses encourage more sophisticated assessment of student work (25).

The researchers express surprise that Category 3, “Colleagues,” was used so seldom (26). Colleagues in the small groups were cited most often, but despite specific encouragement to do so, several groups did not draw on this resource, although Zuidema and Fredricksen note that references to it increased across the three rounds. Also surprising was the low rate of reference to mentors and experienced teachers and to class discussions, activities, and assignments: only one participant mentioned a required “professional reading” as a resource (27). Acknowledging that the PSTs may have used concepts from mentors and class assignments without explicitly naming them, the authors cite prior research suggesting that reference to outside sources can be perceived as undercutting the authority conferred by experience (27).

In Category 4, “Roles as Resources,” Zuidema and Fredricksen note that PSTs were much more likely to draw on their roles as readers or teachers than as writers (28). Arguing that a reader perspective signals an awareness of the importance of audience, the researchers note that most PSTs in their study treated their own individual reader responses as most pertinent, suggesting the need to emphasize the varied perspectives readers might bring to a text (28).

Fifty-four percent of the PSTs’ references invoked “Writing as a Resource” (29). Included in this category were “imagined ideal writing,” “comparisons across student writing,” “holistic” references to “whole texts,” and “excerpts” (29-31). In these cases, PSTs’ uses of the resources ranged from “a rigid, unrhetorical view of writing” in which “rules” governed assessment (29) to a more effective practice that “connected [student writing] with a rhetorical framework” (29). For example, excerpts could serve for “keeping score” on “checklists” or as a means of noting patterns and suggesting directions for teaching (31). Comparisons among students and expectations for other students at similar ages, Zuidema and Fredricksen suggest, allowed some PSTs to reflect on developmental issues, while holistic evaluation allowed consideration of tone, audience, and purpose (30).

Zuidema and Fredricksen conclude that in encouraging preservice teachers to draw on a wide range of resources, “exposure was not enough” (32), and “[m]ere use is not the goal” (33). Using their taxonomy as a teaching tool, they suggest, may help PSTs recognize the range of resources available to them and “scaffold their learning” (33) so that they will be able to make informed decisions when confronted with the multiple challenges inherent in today’s diverse and sometimes “impoverished” contexts for teaching writing (32).