College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Limpo and Alves. Effects of Beliefs about “Writing Skill Malleability” on Performance. JoWR 2017. Posted 11/24/2017.

Limpo, Teresa, and Rui A. Alves. “Relating Beliefs in Writing Skill Malleability to Writing Performance: The Mediating Roles of Achievement Goals and Self-Efficacy.” Journal of Writing Research 9.2 (2017): 97-125. Web. 15 Nov. 2017.

Teresa Limpo and Rui A. Alves discuss a study with Portuguese students designed to investigate pathways between students’ beliefs about writing ability and actual writing performance. They use measures for achievement goals and self-efficacy to determine how these factors mediate between beliefs and performance. Their study goals involved both exploring these relationships and assessing the validity and reliability of the instruments and theoretical models they use (101-02).

The authors base their approach on the assumption that people operate via “implicit theories,” and that central to learning are theories that see “ability” as either “incremental,” in that skills can be honed through effort, or as an “entity” that cannot be improved despite effort (98). Limpo and Alves argue that too little research has addressed how these beliefs about “writing skill malleability” influence learning in the specific “domain” of writing (98).

The authors report earlier research that indicates that students who see writing as an incremental skill perform better in intervention studies. They contend that the “mechanisms” through which this effect occurs have not been thoroughly examined (99).

Limpo and Alves apply a three-part model of achievement goals: “mastery” goals involve the desire to improve and increase competence; “performance-approach” goals involve the desire to do better than others in the quest for competence; and “performance-avoidance” goals manifest as the desire to avoid looking incompetent or worse than others (99-100). Mastery and performance-approach goals correlate positively because they address increased competence, but performance-approach and performance-avoidance goals also correlate because they both concern how learners see themselves in comparison to others (100).

The authors write that “there is overall agreement” among researchers in this field that these goals affect performance. Students with mastery goals display “mastery-oriented learning patterns” such as “use of deep strategies, self-regulation, effort and persistence, . . . [and] positive affect,” while students who focus on performance avoidance exhibit “helpless learning patterns” including “unwillingness to seek help, test anxiety, [and] negative affect” (100-01). Student outcomes with respect to performance-approach goals were less clear (101). The authors hope to clarify the role of self-efficacy in these goal choices and outcomes (101).

Limpo and Alves find that self-efficacy is “perhaps the most studied variable” in examinations of motivation in writing (101). They refer to a three-part model: self-efficacy for “conventions,” or “translating ideas into linguistic forms and transcribing them into writing”; for “ideation,” finding ideas and organizing them; and for “self-regulation,” which involves knowing how to make the most of “the cognitive, emotional, and behavioral aspects of writing” (101). They report associations between self-efficacy, especially for self-regulation, and mastery goals (102). Self-efficacy, particularly for conventions, has been found to be “among the strongest predictors of writing performance” (102).

The authors predicted several “paths” that would illuminate the ways in which achievement goals and self-efficacy linked malleability beliefs and performance. They argue that their study contributes new knowledge by providing empirical data about the role of malleability beliefs in writing (103).
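
To make the predicted chain concrete, here is a minimal sketch of how a beliefs → goals → self-efficacy → performance path might be probed with pairwise regressions. The variable names and data below are synthetic stand-ins invented for illustration; the authors worked from their own survey scales and their own mediation analyses, not this simplified chain.

```python
# Illustrative sketch only: probing a beliefs -> mastery goals ->
# self-efficacy -> performance chain with pairwise regressions.
# All variables are synthetic stand-ins for the study's survey scales.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 200
beliefs = rng.normal(size=n)                       # writing-skill malleability scale
goals = 0.5 * beliefs + rng.normal(size=n)         # mastery-goal scale
efficacy = 0.6 * goals + rng.normal(size=n)        # self-efficacy (self-regulation)
performance = 0.4 * efficacy + rng.normal(size=n)  # holistic essay score

for label, x, y in [
    ("beliefs -> mastery goals", beliefs, goals),
    ("mastery goals -> self-efficacy", goals, efficacy),
    ("self-efficacy -> performance", efficacy, performance),
]:
    fit = linregress(x, y)
    print(f"{label}: slope = {fit.slope:.2f}, p = {fit.pvalue:.3g}")
```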

The study was conducted among native Portuguese speakers in 7th and 8th grades in a “public cluster of schools in Porto” that is representative of the national population (104). Students received writing instruction only in their Portuguese language courses, in which teachers were encouraged to use “a process-oriented approach” to teach a range of genres but were not given extensive pedagogical support or the resources to provide a great deal of “individualized feedback” (105).

The study reported in this article was part of a larger study; for the relevant activities, students first completed scales to measure their beliefs about writing-skill malleability and to assess their achievement goals. They were then given one of two prompts for “an opinion essay” on whether students should have daily homework or extracurricular activities (106). After the prompts were provided, students filled out a sixteen-item measure of self-efficacy for conventions, ideation, and self-regulation. A three-minute opportunity to brainstorm about their responses to the prompts followed; students then wrote a five-minute “essay,” which was assessed as a measure of performance by graduate research assistants who had been trained to use a “holistic rating rubric.” Student essays were typed and mechanical errors corrected. The authors contend that the use of such five-minute tasks has been shown to be valid (107).

The researchers predicted that they would see correlations between malleability beliefs and performance; they expected to see beliefs affect goals, which would affect self-efficacy, and lead to differences in performance (115). They found these associations for mastery goals. Students who saw writing as an incremental, improvable skill displayed “a greater orientation toward mastery goals” (115). The authors state that this result for writing had not been previously demonstrated. Their research reveals that “mastery goals contributed to students’ confidence” and therefore to self-efficacy, perhaps because students with this belief “actively strive” for success (115).

They note, however, that prior research correlated these results with self-efficacy for conventions, whereas their study showed that self-efficacy for self-regulation, students’ belief that “they can take control of their own writing,” was the more important contributor to performance (116); in fact, it was “the only variable directly influencing writing performance” (116). Limpo and Alves hypothesize that conventions appeared less central in their study because the essays had been typed and corrected, so that errors had less effect on performance scores (116).

Data on the relationship between malleability beliefs and performance-approach or performance-avoidance goals, the goals associated with success in relation to others, were “less clear-cut” (117). Students who saw skills as fixed tended toward performance-avoidance, but neither type of performance goal affected self-efficacy.

Limpo and Alves recount an unexpected finding that the choice of performance-avoidance goals did not affect performance scores on the essays (117). The authors hypothesize that the low-stakes nature of the task and its simplicity did not elicit “the self-protective responses” that often hinder writers who tend toward these avoidance goals (117). These unclear results lead Limpo and Alves to withhold judgment about the relationship among these two kinds of goals, self-efficacy, and performance, positing that other factors not captured in the study might be involved (117-18).

They recommend more extensive research with more complex writing tasks and environments, including longitudinal studies and consideration of such factors as “past performance” and gender (118). They encourage instructors to foster a view of writing as an incremental skill and to emphasize self-regulation strategies. They recommend “The Self-Regulated Strategy Development model” as “one of the most effective instructional models for teaching writing” (119).



Bastian, Heather. Affect and “Bringing the Funk” to First-Year Writing. CCC, Sept. 2017. Posted 10/05/2017.

Bastian, Heather. “Student Affective Responses to ‘Bringing the Funk’ in the First-Year Writing Classroom.” College Composition and Communication 69.1 (2017): 6-34. Print.

Heather Bastian reports a study of students’ affective responses to innovative assignments in a first-year writing classroom. Building on Adam Banks’s 2015 CCCC Chair’s Address, Bastian explores the challenges instructors may face when doing what Banks called “bring[ing] the funk” (qtd. in Bastian 6) by asking students to work in genres that do not conform to “academic convention” (7).

According to Bastian, the impetus for designing such units and assignments includes the need to “prepare students for uncertain futures within an increasingly technological world” (8). Bastian cites scholarship noting teachers’ inability to forecast exactly what will be demanded of students as they move into professions; this uncertainty, in this view, means that the idea of what constitutes writing must be expanded and students should develop the rhetorical flexibility to adapt to the new genres they may encounter (8).

Moreover, Bastian argues, citing Mary Jo Reiff and Anis Bawarshi, that students’ dependence on familiar academic formulas means that their responses to rhetorical situations can become automatic and unthinking, with the result that they do not question the potential effects of their choices or explore other possible solutions to rhetorical problems. This automatic response limits “their meaning-making possibilities to what academic convention allows and privileges” (8-9).

Bastian contends that students not only fall back on traditional academic genres but also develop “deep attachments” to the forms they find familiar (9). The field, she states, has little data on what these attachments are like or how they guide students’ rhetorical decisions (9, 25).

She sees these attachments as a manifestation of “affect”; she cites Susan McLeod’s definition of affect as “noncognitive phenomena, including emotions but also attitudes, beliefs, moods, motivations, and intuitions” (9). Bastian cites further scholarship that indicates a strong connection between affect and writing as well as emotional states and learning (9-10). In her view, affect is particularly important when teachers design innovative classroom experiences because students’ affective response to such efforts can vary greatly; prior research suggests that as many as half the students in a given situation will resist moving beyond the expected curriculum (10).

Bastian enlisted ten of twenty-two students in a first-year-writing class at a large, public midwestern university in fall 2009 (11). She used “multiple qualitative research methods” to investigate these first-semester students’ reactions to the third unit in a four-unit curriculum intended to meet the program’s goals of “promot[ing] rhetorical flexibility and awareness”; the section under study explored genre from different perspectives (11). The unit introduced “the concept of genre critique,” as defined by the course textbook, Amy J. Devitt et al.’s Scenes of Writing: “questioning and evaluating to determine the strengths and shortcomings of a genre as well as its ideological import” (12).

Bastian designed the unit to “disrupt” students’ expectation of a writing class on the reading level, in that she presented her prompt as a set of “game rules,” and also on the “composing” level, as the unit did not specify what genre the students were to critique nor the form in which they were to do so (12). Students examined a range of genres and genre critiques, “including posters, songs, blogs, . . . artwork, poems, . . . comics, speeches, creative nonfiction. . . .” (13). The class developed a list of the possible forms their critiques might take.

Bastian acted as observer, recording evidence of “the students’ lived experiences” as they negotiated the unit. She attended all class sessions and made notes of “physical reactions” and “verbal reactions” (13). Further data consisted of one-hour individual interviews and a set of twenty-five questions. For this study, she concentrated on questions that asked about students’ levels of comfort with various stages of the unit (13).

Like other researchers, Bastian found that students asked to create innovative projects began with “confusion”; her students also displayed “distrust” (14) in that they were not certain that the assignment actually allowed them to choose their genres (19). All students considered “the essay” the typical genre for writing classes; some found the familiar conventions a source of confidence and comfort, while for others the sense of routine was “boring” (student, qtd. in Bastian 15).

Bastian found that the degree to which students expressed “an aversion” to the constraints of “academic convention” affected their responses to the assignment, particularly the kinds of genres they chose and their levels of comfort with the unusual assignment.

Those who said that they wanted more freedom in classroom writing chose what the students as a whole considered “atypical” genres for their critiques, such as recipes, advertisements, or magazine covers (16-17). Students who felt safer within the conventions preferred more “typical” choices such as PowerPoint presentations and business letters (16, 22). The students who picked atypical genres claimed that they appreciated the opportunity to experience “a lot more chance to express yourself” (student, qtd. in Bastian 22), and possibly discover “hidden talents” (22).

The author found, however, that even students who wanted more freedom did not begin the unit with high levels of comfort. She found that the unusual way the assignment was presented, the “concept of critique,” and the idea that they could pick their own genres concerned even the more adventurous students (18). In Bastian’s view, the “power of academic convention” produced a forceful emotional attachment: students “distrusted the idea that both textual innovation and academic convention is both valid and viable in the classroom” (20).

Extensive exposure to critiques and peer interaction reduced discomfort for all students by the end of the unit (19), but those who felt least safe outside the typical classroom experience reported less comfort (23). One student expressed a need to feel safe, yet, after seeing his classmates’ work, chose an atypical response, encouraging Bastian to suggest that with the right support, “students can be persuaded to take risks” (23).

Bastian draws on research suggesting that what Barry Kroll calls “intelligent confusion” (qtd. in Bastian 26) and “cognitive disequilibrium” can lead to learning if supported by appropriate activities (26). The students reported gains in a number of rhetorical dimensions and specifically cited the value of having to do something that made them uncomfortable (25). Bastian argues that writing teachers should not be surprised to encounter such resistance, and can prepare for it with four steps: “openly acknowledge and discuss” the discomfort students might feel; model innovation; design activities that translate confusion into learning; and allow choice (27-28). She urges more empirical research on the nature of students’ affective responses to writing instruction (29).




Bailey & Bizzaro. Research in Creative Writing. August RTE. Posted 08/25/2017.

Bailey, Christine, and Patrick Bizzaro. “Research in Creative Writing: Theory into Practice.” Research in the Teaching of English 52.1 (2017): 77-97. Print.

Christine Bailey and Patrick Bizzaro discuss the disciplinarity of creative writing and its place in relation to the discipline of composition. They work to establish an aesthetic means of interpreting and representing data about creative writing in the belief that, in order to emerge as a discipline in its own right, creative writing must arrive at a set of shared values and understandings as to how research is conducted.

Bailey and Bizzaro’s concerns derive from their belief that creative writing must either establish itself as a discipline or it will be incorporated into composition studies (81). They contend that creative writing studies, like other emerging disciplines, must account for, in the words of Timothy J. San Pedro, “hierarchies of power” within institutions (qtd. in Bailey and Bizzaro 78) such that extant disciplines control or oppress less powerful disciplines, much as “teaching practices and the texts used in schools” oppress marginal student groups (78). A decision to use the methodologies of the “dominant knowledges” thus accedes to “imperial legacies” (San Pedro, qtd. in Bailey and Bizzaro 78).

Bailey and Bizzaro report that discussion of creative writing by compositionists such as Douglas Hesse and Wendy Bishop has tended to address how creative writing can be appropriately positioned as part of composition (79). Drawing on Bishop, the authors ascribe anxiety within some English departments over the role of creative writing to “genre-fear,” that is, “the belief that two disciplines cannot simultaneously occupy the same genre” (79).

They recount Bishop’s attempt to resolve the tension between creative writing studies and composition by including both under what she called a de facto “ready-made synthesis” that she characterized as the “study of writers writing” (qtd. in Bailey and Bizzaro 80). In the authors’ view, this attempt fails because the two fields differ substantially: “what one values as the basis for making knowledge differs from what the other values” (80).

The authors see creative writing studies itself as partially responsible for the difficulties the field has faced in establishing itself as a discipline (79, 80-81). They draw on Stephen Toulmin’s approach to disciplinarity: “a discipline exists ‘where men’s [sic] shared commitment to a sufficiently agreed set of ideals leads to the development of an isolable and self-defining repertory of procedures’” (qtd. in Bailey and Bizzaro 80). The authors elaborate to contend that in a discipline, practitioners develop shared views as to what counts as knowledge and similarly shared views about the most appropriate means of gathering and reporting that knowledge (80).

Creative writing studies, they contend, has not yet acted on these criteria (81). Rather, they state, creative writers seem to eschew empirical research in favor of “craft interviews” consisting of “writers’ self-reports”; meanwhile, compositionists have undertaken to fill the gap by applying research methodologies appropriate to composition but not to creative writing (81). The authors’ purpose, in this article, is to model a research methodology that they consider more in keeping with the effort to define and apply the specific values accruing to creative writing.

The methodology they advance involves gathering, interpreting, and representing aesthetic works via an aesthetic form, in this case, the novel. Students in nine sections of first-year-writing classes in spring and fall 2013 responded to a “creative-narrative” prompt: “How did you come to this place in your life? Tell me your story” (84). Students were asked to respond with “a creative piece such as a poem, screenplay, or graphic novel” (84). All students were invited to participate with the understanding that their work would be confidential and might be represented in published research that might take on an alternative form such as a novel; the work of students who signed consent forms was duplicated and analyzed (84-85).

Data ultimately consisted of 57 artifacts, 55 of which were poems (85). Coding drew on the work of scholars like K. M. Powell, Elspeth Probyn, and Roz Ivanič to examine students’ constructions of self through the creative-narrative process, and on that of James E. Seitz to consider how students’ use of metaphor created meaning (85, 86). Further coding was based on Kara P. Alexander’s 2011 study of literacy narratives (86).

This analysis was combined with the results of a demographic survey to generate six groups revolving around “[c]ommon threads” in the data (86); “personas” revealed through the coded characteristics divided students into those who, for example, “had a solid identity in religion”; “were spiritually lost”; were “uncertain of identity [and] desiring change”; were “reclusive” with “strong family ties”; were interested in themes of “redemption or reformation”; or “had lived in multiple cultures” (86). This list, the authors state, corresponds to “a standard analysis” that they contrast with their alternative creative presentation (86).

In their methodology, Bailey and Bizzaro translate the “composites” identified by the descriptors into six characters for a young-adult novel Bailey developed (88). Drawing on specific poems by students who fell into each composite as well as on shared traits that emerged from analysis of identity markers and imagery in the poems, the authors strove to balance the identities revealed through the composites with the individuality of the different students. They explore how the characters of “Liz” and “Emmy” are derived from the “data” provided by the poems (89-90), and offer an excerpt of the resulting novel (90-92).

They present examples of other scholars who have “used aesthetic expressions in the development of research methods” (88). Such methods include ethnography, a form of research that the authors consider “ultimately a means of interpretive writing” (93). Thus, in their view, creating a novel from the data presented in poems is a process of interpreting those data, and the novel is similar to the kind of “storytell[ing]” (93) in which ethnography gathers data, then uses it to represent, interpret, and preserve individuals and their larger cultures (92-93).

They continue to contend that embracing research methods that value aesthetic response is essential if creative writing is to establish itself as a discipline (93). These methodologies, they argue, can encourage teachers both to value aesthetic elements of student work and to use their own aesthetic responses to enhance teaching, particularly as these methods of gathering and representing data result in “aesthetic objects” that are “evocative, engage readers’ imaginations, and resonate with the world we share not only with our students but also with our colleagues in creative writing” (94). They argue that “when the ‘literariness’ of data reports [becomes] a consideration in the presentation of research,” composition and creative writing will have achieved “an equitable relationship in writing studies” (95).




Gallagher, Chris W. Behaviorism as Social-Process Pedagogy. Dec. CCC. Posted 01/12/2017.

Gallagher, Chris W. “What Writers Do: Behaviors, Behaviorism, and Writing Studies.” College Composition and Communication 68.2 (2016): 238-65. Web. 12 Dec. 2016.

Chris W. Gallagher provides a history of composition’s relationship with behaviorism, arguing that this relationship is more complex than commonly supposed and that writing scholars can use the connections to respond to current pressures imposed by reformist models.

Gallagher notes the efforts of many writing program administrators (WPAs) to articulate professionally informed writing outcomes to audiences in other university venues, such as general-education committees (238-39). He reports that such discussions often move quickly from compositionists’ focus on what helps students “writ[e] well” to an abstract and universal ideal of “good writing” (239).

This shift, in Gallagher’s view, encourages writing professionals to get caught up in “the work texts do” in contrast to the more important focus on “the work writers do” (239; emphasis original). He maintains that “the work writers do” is in fact an issue of behaviors writers exhibit and practice, and that the resistance to “behaviorism” that characterizes the field encourages scholars to lose sight of the fact that the field is “in the behavior business; we are, and should be, centrally concerned with what writers do” (240; emphasis original).

He suggests that “John Watson’s behavioral ‘manifesto’—his 1913 paper, ‘Psychology as the Behaviorist Views It’” (241) captures what Gallagher sees as the “general consensus” of the time and a defining motivation for behaviorism: a shift away from “fuzzy-headed . . . introspective analysis” to the more productive process of “study[ing] observable behaviors” (241). Gallagher characterizes many different types of behaviorism, ranging from those designed to actually control behavior to those hoping to understand “inner states” through their observable manifestations (242).

One such productive model of behaviorism, in Gallagher’s view, is that of B. F. Skinner in the 1960s and 1970s. Gallagher argues that Skinner emphasized not “reflex behaviors” like those associated with Pavlov but rather “operant behaviors,” which Gallagher, citing psychologist John Staddon, characterizes as concerned with “the ways in which human (and other animal) behavior operates in its environment and is guided by its consequences” (242).

Gallagher contends that composition’s resistance to work like Skinner’s was influenced by views like that of James A. Berlin, for whom behaviorism was aligned with “current-traditional rhetoric” because it was deemed an “objective rhetoric” that assumed that writing was merely the process of conveying an external reality (243). The “epistemic” focus and “social turn” that emerged in the 1980s, Gallagher writes, generated resistance to “individualism and empiricism” in general, leading to numerous critiques of what were seen as behaviorist impulses.

Gallagher attributes much tension over behaviorism in composition to the influx of government funding in the 1960s designed to “promote social efficiency through strategic planning and accountability” (248). At the same time that this funding rewarded technocratic expertise, composition focused on “burgeoning liberation movements”; in Gallagher’s view, behaviorism erred by falling on the “wrong” or “science side” of this divide (244). Gallagher chronicles efforts by the National Council of Teachers of English and various scholars to arrive at a “détente” that could embrace forms of accountability fueled by behaviorism, such as “behavioral objectives” (248), while allowing the field to “hold on to its humanist core” (249).

In Gallagher’s view, scholars who struggled to address behaviorism, such as Lynn Z. and Martin Bloom, moved beyond mechanistic models of learning to advocate many features of effective teaching recognized today, such as a resistance to error-oriented pedagogy; attention to process, purposes, and audiences; and provision of “regular, timely feedback” (245-46). Negative depictions of behaviorism, Gallagher argues, in fact neglect the degree to which, in such scholarship, behaviorism becomes “a social-process pedagogy” (244; emphasis original).

In particular, Gallagher argues that “the most controversial behaviorist figure in composition history,” Robert Zoellner (246), has been underappreciated. According to Gallagher, Zoellner’s “talk-write” pedagogy was a corrective for “think-write” models that assumed that writing merely conveyed thought, ignoring the possibility that writing and thinking could inform each other (246). Zoellner rejected reflex-driven behaviorism that predetermined stimulus-response patterns, opting instead for an operant model in which objectives followed from rather than controlled students’ behaviors, which should be “freely emitted” (Zoellner, qtd. in Gallagher 250) and should emerge from “transactional” relationships among teachers and students in a “collaborative,” lab-like setting in which teachers interacted with students and modeled writing processes (247).

The goal, according to Gallagher, was consistently to “help students develop robust repertoires of writing behaviors to help them adapt to the different writing situations in which they would find themselves” (247). Gallagher contends that Zoellner advocated teaching environments in which

[behavioral objectives] are not codified before the pedagogical interaction; . . . are rooted in the transactional relationship between teachers and students; . . . are not required to be quantifiably measurable; and . . . operate in a humanist idiom. (251)

Rejected in what Martin Nystrand denoted “the social 1980s” (qtd. in Gallagher 251), as funding for accountability initiatives withered (249), behaviorism did attract the attention of Mike Rose. His chapter in Why Writers Can’t Write and that of psychology professor Robert Boice attended to the ways in which writers relied on specific behaviors to overcome writer’s block; in Gallagher’s view, Rose’s understanding of the shortcomings of overzealous behaviorism did not prevent him from taking “writers’ behaviors qua behaviors extremely seriously” (253).

The 1990s, Gallagher reports, witnessed a moderate revival of interest in Zoellner, who became one of the “unheard voices” featured in new histories of the field (254). Writers of these histories, however, struggled to dodge behaviorism itself, hoping to develop an empiricism that would not insist on “universal laws and objective truth claims” (255). After these efforts, however, Gallagher reports that the term faded from view, re-emerging only recently in Maja Joiwind Wilson’s 2013 dissertation as a “repressive” methodology exercised as a form of power (255).

In contrast to these views, Gallagher argues that “behavior should become a key term in our field” (257). Current pressures to articulate ways of understanding learning that will resonate with reformers and those who want to impose rigid measurements, he contends, require a vocabulary that foregrounds what writers actually do and frames the role of teachers as “help[ing] students expand their behavioral repertoires” (258; emphasis original). This vocabulary should emphasize the social aspects of all behaviors, thereby foregrounding the fluid, dynamic nature of learning.

In his view, such a vocabulary would move scholars beyond insisting that writing and learning “operate on a higher plane than that of mere behaviors”; instead, it would generate “better ways of thinking and talking about writing and learning behaviors” (257; emphasis original). He recommends, for example, creating “learning goals” instead of “outcomes” because such a shift discourages efforts to reduce complex activities to pre-determined, reflex-driven steps toward a static result (256). Scholars accustomed to a vocabulary of “processes, practices, and activities” can benefit from learning as well to discuss “specific, embodied, scribal behaviors” and the environments necessary if the benefits accruing to these behaviors are to be realized (258).




Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03.

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
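
The median split itself is a simple operation; the sketch below shows one way it could look. The equal-weight average of standardized SAT scores and course grades is an assumption for illustration, not the authors’ exact procedure.

```python
# Sketch of a median split on a writing-ability composite. The equal-weight
# average of standardized indicators is assumed for illustration only.
import numpy as np

def median_split(sat_scores, writing_grades):
    """Return a boolean array: True = "high" author/reviewer, False = "low"."""
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()
    composite = (z(sat_scores) + z(writing_grades)) / 2
    return composite >= np.median(composite)

sat = [480, 560, 690, 720, 510, 650]        # hypothetical SAT writing scores
grades = [2.7, 3.3, 3.9, 3.6, 3.0, 3.8]     # hypothetical course grades
print(median_split(sat, grades))            # e.g., [False False True True False True]
```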

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High-ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or number of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).
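
Both reported figures reduce to proportions over the coded comments. Here is a minimal sketch, with an invented record format, of tabulating an overall implementation rate and rates by author/reviewer ability pairing; the study coded thousands of real comments rather than records like these.

```python
# Sketch: implementation rate overall and by (author, reviewer) ability pair.
# The comment records are invented placeholders, not the study's data.
from collections import defaultdict

comments = [
    {"author": "low", "reviewer": "low", "implemented": True},
    {"author": "low", "reviewer": "high", "implemented": False},
    {"author": "high", "reviewer": "low", "implemented": True},
    {"author": "high", "reviewer": "high", "implemented": False},
    # ... one record per actionable comment
]

overall = sum(c["implemented"] for c in comments) / len(comments)
print(f"overall implementation rate: {overall:.0%}")

acted, total = defaultdict(int), defaultdict(int)
for c in comments:
    key = (c["author"], c["reviewer"])
    total[key] += 1
    acted[key] += c["implemented"]
for key in sorted(total):
    print(f"author={key[0]:4s} reviewer={key[1]:4s}: {acted[key] / total[key]:.0%}")
```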

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be limited by “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn suggest that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



Grouling and Grutsch McKinney. Multimodality in Writing Center Texts. C&C, in press, 2016. Posted 08/21/2016.

Grouling, Jennifer, and Jackie Grutsch McKinney. “Taking Stock: Multimodality in Writing Center Users’ Texts.” Computers and Composition (2016), in press. http://dx.doi.org/10.1016/j.compcom.2016.04.003. Web. 12 Aug. 2016.

Jennifer Grouling and Jackie Grutsch McKinney note that the need for multimodal instruction has been accepted for more than a decade by composition scholars (1). But they argue that the scholarship supporting multimodality as “necessary and appropriate” in classrooms and writing centers has tended to be “of the evangelical vein” consisting of “think pieces” rather than actual studies of how multimodality figures in classroom practice (2).

They present a study of multimodality in their own program at Ball State University as a step toward research that explores what kinds of multimodal writing takes place in composition classrooms (2). Ball State, they report, can shed light on this question because “there has been programmatic and curricular support here [at Ball State] for multimodal composition for nearly a decade now” (2).

The researchers focus on texts presented to the writing center for feedback. They ask three specific questions:

Are collected texts from writing center users multimodal?

What modes do students use in creation of their texts?

Do students call their texts multimodal? (2)

For two weeks in the spring semester, 2014, writing center tutors asked students visiting the center to allow their papers to be included in the study. Eighty-one of 214 students agreed. Identifying information was removed and the papers stored in a digital folder (3).

During those two weeks as well as the next five weeks, all student visitors to the center were asked directly if their projects were multimodal. Students could respond “yes,” “no,” or “not sure” (3). The purpose of this extended inquiry was to ensure that responses to the question during the first two “collection” weeks were not in some way unrepresentative. Grouling and Grutsch McKinney note that the question could be answered online or in person; students were not provided with a definition of “multimodal” even if they expressed confusion but only told to “answer as best they could” (3).

The authors decided against basing their study on the argument advanced by scholars like Jody Shipka and Paul Prior that “all communication practices have multimodal components” because such a definition did not allow them to see the distinctions they were investigating (3). Definitions like those presented by Tracey Bowen and Carl Whithaus that emphasize the “conscious” use of certain components also proved less helpful because students were not interviewed and their conscious intent could not be accessed (3). However, Bowen and Whithaus also offered a “more succinct definition” that proved useful: “multimodality is the ‘designing and composing beyond written words'” (qtd. in Grouling and Grutsch McKinney 3).

Examination of the papers led the researchers to code for a “continuum” of multimodality rather than a present/not-present binary (3-4). Fifty-seven, or 74%, of the papers were composed only in words and were coded as zero or “monomodal” (4). Some papers occupied a “grey area” because of elements like bulleted lists and tables. The researchers coded texts using bullets as “1” and those using lists and tables as “2”; these categories shared the designation “elements of graphic design,” and 16 papers (19.8%) fell under it. Codes “3” and “4” denoted one or more modes beyond text and thus marked “multimodal” work. No paper received a “4”; only eight (9.9%) received a “3,” indicating inclusion of one mode beyond words (4). Thus, the study materials exhibited little use of multimodal elements (4).
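
A minimal sketch of tabulating such ordinal codes appears below. The toy code list is invented and the category labels paraphrase the continuum described above; neither is the authors’ data or code.

```python
# Sketch: tabulate 0-4 multimodality codes, one per collected paper.
# The code list is invented; labels paraphrase the continuum described above.
from collections import Counter

codes = [0, 0, 1, 0, 3, 2, 0, 0, 1, 0]  # hypothetical ratings, one per paper
labels = {0: "monomodal", 1: "bullets", 2: "lists/tables",
          3: "one mode beyond words", 4: "multiple modes beyond words"}

dist = Counter(codes)
for code in range(5):
    share = dist[code] / len(codes)
    print(f"code {code} ({labels[code]}): {dist[code]} papers, {share:.0%}")
```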

In answer to the second question, findings indicated that modes used even by papers coded “3” included only charts, graphs, and images. None used audio, video, or animation (4). Grouling and Grutsch McKinney posit that the multimodal elements were possibly not “created by the student” and that the instructor or template may have prompted the inclusion of such materials (5).

They further report that they could not tell whether any student had “consciously manipulated” elements of the text to make it multimodal (5). They observe that in two cases, students used visual elements apparently intended to aid in development of a paper in progress (5).

The “short answer” to the third research question, whether students saw their papers as multimodal, was “not usually” (5; emphasis original). Only 6% of 637 appointments and 6% of writers of the 81 collected texts answered yes. In only one case in which the student identified the paper as multimodal did the coders agree. Two of the five texts called multimodal by students received a code of 0 from the raters (5). Students were more able to recognize when their work was not multimodal; 51 of 70 texts coded by the raters as monomodal were also recognized as such by their authors (5).

Grouling and Grutsch McKinney express concern that students seem unable to identify multimodality, given that such work is required in both first-year courses; even taking transfer students into account, the authors note, “the vast majority” of undergraduates will have taken a relevant course (6). They state that they would be less concerned that students do not use the term if the work produced exhibited multimodal features, but this was not the case (6).

University system data indicated that a plurality of writing center attendees came from writing classes, but students from other courses produced some of the few multimodal pieces, though they did not use the term (7).

Examining program practices, Grouling and Grutsch McKinney determined that often only one assignment was designated “multimodal”—most commonly, presentations using PowerPoint (8). The authors advocate for “more open” assignments that present multimodality “as a rhetorical choice, and not as a requirement for an assignment” (8). Such emphasis should be accompanied by “programmatic assessment” to determine what students are actually learning (8-9).

The authors also urge more communication across the curriculum about the use of multiple modes in discipline-specific writing. While noting that advanced coursework in a discipline may have its own vocabulary and favored modes, Grouling and Grutsch McKinney argue that sharing the vocabulary from composition studies with faculty across disciplines will help students see how concepts from first-year writing apply in their coursework and professional careers (9).

The authors contend that instructors and tutors should attend to “graphic design elements” like “readability and layout” (10). In all cases, they argue, students should move beyond simply inserting illustrations into text to a better “integration” of modes to enhance communication (10). Further, incorporating multimodal concepts in invention and composing can enrich students’ understanding of the writing process (10). Such developments, the authors propose, can move the commitment to multimodality beyond the “evangelical phase” (11).




Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of their peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric” using a software package, My Reviewers, that allowed for discursive comments but also, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).
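
The underlying comparison is straightforward to sketch: pair each draft’s instructor score with its peer ratings, then check both the correlation and the direction of the gap. The scores below are synthetic; only the five-point rubric scale is carried over from the study.

```python
# Sketch: correlation and signed gap between instructor and peer rubric scores
# for the same drafts. Scores are synthetic; the 1-5 rubric scale is real.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(1)
instructor = rng.integers(1, 6, size=100).astype(float)        # 1-5 rubric scores
peers = np.clip(instructor + rng.normal(0.4, 1.0, 100), 1, 5)  # peers lean high

r, p = pearsonr(instructor, peers)
t, p_gap = ttest_rel(peers, instructor)
print(f"instructor-peer correlation: r = {r:.2f} (p = {p:.3g})")
print(f"mean peer minus instructor: {np.mean(peers - instructor):+.2f} (p = {p_gap:.3g})")
```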

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).
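
This halo-style pattern is easy to illustrate: simulate ratings driven by one overall impression, then inspect the trait-by-trait correlation matrix. Everything below is simulated; only the count of eight rubric traits comes from the study.

```python
# Sketch: if one holistic impression drives all eight trait ratings, every
# off-diagonal entry of the trait correlation matrix comes out positive.
# Data are simulated; only the eight-trait rubric structure is from the study.
import numpy as np

rng = np.random.default_rng(2)
n_papers, n_traits = 500, 8
impression = rng.normal(size=(n_papers, 1))   # shared holistic sense of quality
ratings = 0.8 * impression + 0.6 * rng.normal(size=(n_papers, n_traits))

corr = np.corrcoef(ratings, rowvar=False)
off_diag = corr[~np.eye(n_traits, dtype=bool)]
print(f"all inter-trait correlations positive: {bool((off_diag > 0).all())}")
print(f"mean inter-trait r: {off_diag.mean():.2f}")
```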

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer review so that it benefits the widest range of students (73-74).