College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Gallagher, Chris W. Behaviorism as Social-Process Pedagogy. Dec. CCC. Posted 01/12/2017.

Gallagher, Chris W. “What Writers Do: Behaviors, Behaviorism, and Writing Studies.” College Composition and Communication 68.2 (2016): 238-65. Web. 12 Dec. 2016.

Chris W. Gallagher provides a history of composition’s relationship with behaviorism, arguing that this relationship is more complex than commonly supposed and that writing scholars can use the connections to respond to current pressures imposed by reformist models.

Gallagher notes the efforts of many writing program administrators (WPAs) to articulate professionally informed writing outcomes to audiences in other university venues, such as general-education committees (238-39). He reports that such discussions often move quickly from compositionists’ focus on what helps students “writ[e] well” to an abstract and universal ideal of “good writing” (239).

This shift, in Gallagher’s view, encourages writing professionals to get caught up in “the work texts do” in contrast to the more important focus on “the work writers do” (239; emphasis original). He maintains that “the work writers do” is in fact an issue of behaviors writers exhibit and practice, and that the resistance to “behaviorism” that characterizes the field encourages scholars to lose sight of the fact that the field is “in the behavior business; we are, and should be, centrally concerned with what writers do” (240; emphasis original).

He suggests that “John Watson’s behavioral ‘manifesto’—his 1913 paper, ‘Psychology as the Behaviorist Views It’” (241) captures what Gallagher sees as the “general consensus” of the time and a defining motivation for behaviorism: a shift away from “fuzzy-headed . . . introspective analysis” to the more productive process of “study[ing] observable behaviors” (241). Gallagher characterizes many different types of behaviorism, ranging from those designed to actually control behavior to those hoping to understand “inner states” through their observable manifestations (242).

One such productive model of behaviorism, in Gallagher’s view, is that of B. F. Skinner in the 1960s and 1970s. Gallagher argues that Skinner emphasized not “reflex behaviors” like those associated with Pavlov but rather “operant behaviors,” which Gallagher, citing psychologist John Staddon, characterizes as concerned with “the ways in which human (and other animal) behavior operates in its environment and is guided by its consequences” (242).

Gallagher contends that composition’s resistance to work like Skinner’s was influenced by views like that of James A. Berlin, for whom behaviorism was aligned with “current-traditional rhetoric” because it was deemed an “objective rhetoric” that assumed that writing was merely the process of conveying an external reality (243). The “epistemic” focus and “social turn” that emerged in the 1980s, Gallagher writes, generated resistance to “individualism and empiricism” in general, leading to numerous critiques of what were seen as behaviorist impulses.

Gallagher attributes much tension over behaviorism in composition to the influx of government funding in the 1960s designed to “promote social efficiency through strategic planning and accountability” (248). At the same time that this funding rewarded technocratic expertise, composition focused on “burgeoning liberation movements”; in Gallagher’s view, behaviorism erred by falling on the “wrong” or “science side” of this divide (244). Gallagher chronicles efforts by the National Council of Teachers of English and various scholars to arrive at a “détente” that could embrace forms of accountability fueled by behaviorism, such as “behavioral objectives” (248), while allowing the field to “hold on to its humanist core” (249).

In Gallagher’s view, scholars who grappled with behaviorism, such as Lynn Z. and Martin Bloom, moved beyond mechanistic models of learning to advocate many features of effective teaching recognized today, such as resistance to error-oriented pedagogy; attention to process, purposes, and audiences; and provision of “regular, timely feedback” (245-46). Negative depictions of behaviorism, Gallagher argues, in fact neglect the degree to which, in such scholarship, behaviorism becomes “a social-process pedagogy” (244; emphasis original).

In particular, Gallagher argues that “the most controversial behaviorist figure in composition history,” Robert Zoellner (246), has been underappreciated. According to Gallagher, Zoellner’s “talk-write” pedagogy was a corrective for “think-write” models that assumed that writing merely conveyed thought, ignoring the possibility that writing and thinking could inform each other (246). Zoellner rejected reflex-driven behaviorism that predetermined stimulus-response patterns, opting instead for an operant model in which objectives followed from rather than controlled students’ behaviors, which should be “freely emitted” (Zoellner, qtd. in Gallagher 250) and should emerge from “transactional” relationships among teachers and students in a “collaborative,” lab-like setting in which teachers interacted with students and modeled writing processes (247).

The goal, according to Gallagher, was consistently to “help students develop robust repertoires of writing behaviors to help them adapt to the different writing situations in which they would find themselves” (247). Gallagher contends that Zoellner advocated teaching environments in which

[behavioral objectives] are not codified before the pedagogical interaction; . . . are rooted in the transactional relationship between teachers and students; . . . are not required to be quantifiably measurable; and . . . operate in a humanist idiom. (251)

Rejected in what Martin Nystrand denoted “the social 1980s” (qtd. in Gallagher 251), as funding for accountability initiatives withered (249), behaviorism did attract the attention of Mike Rose. His chapter in Why Writers Can’t Write and that of psychology professor Robert Boice attended to the ways in which writers relied on specific behaviors to overcome writer’s block; in Gallagher’s view, Rose’s understanding of the shortcomings of overzealous behaviorism did not prevent him from taking “writers’ behaviors qua behaviors extremely seriously” (253).

The 1990s, Gallagher reports, witnessed a moderate revival of interest in Zoellner, who became one of the “unheard voices” featured in new histories of the field (254). Writers of these histories, however, struggled to dodge behaviorism itself, hoping to develop an empiricism that would not insist on “universal laws and objective truth claims” (255). After these efforts, Gallagher notes, the term faded from view, re-emerging only recently in Maja Joiwind Wilson’s 2013 dissertation as a “repressive” methodology exercised as a form of power (255).

In contrast to these views, Gallagher argues that “behavior should become a key term in our field” (257). Current pressures to articulate ways of understanding learning that will resonate with reformers and those who want to impose rigid measurements, he contends, require a vocabulary that foregrounds what writers actually do and frames the role of teachers as “help[ing] students expand their behavioral repertoires” (258; emphasis original). This vocabulary should emphasize the social aspects of all behaviors, thereby foregrounding the fluid, dynamic nature of learning.

In his view, such a vocabulary would move scholars beyond insisting that writing and learning “operate on a higher plane than that of mere behaviors”; instead, it would generate “better ways of thinking and talking about writing and learning behaviors” (257; emphasis original). He recommends, for example, creating “learning goals” instead of “outcomes” because such a shift discourages efforts to reduce complex activities to pre-determined, reflex-driven steps toward a static result (256). Scholars accustomed to a vocabulary of “processes, practices, and activities” can benefit from learning as well to discuss “specific, embodied, scribal behaviors” and the environments necessary if the benefits accruing to these behaviors are to be realized (258).

 


Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were likewise categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
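
A minimal sketch, in Python, of how such a median split turns a continuous measure into the binary high/low designation described above; the composite scores and names below are invented for illustration, not the study’s actual instrument:

```python
import statistics

def median_split(scores):
    """Label each participant 'high' or 'low' relative to the group median."""
    cutoff = statistics.median(scores.values())
    return {name: ("high" if value >= cutoff else "low")
            for name, value in scores.items()}

# Hypothetical composite ability scores (the study drew on SAT scores and
# grades in two first-year writing courses; this composite is an assumption).
writers = {"author_1": 610, "author_2": 540, "author_3": 720, "author_4": 480}
print(median_split(writers))
# {'author_1': 'high', 'author_2': 'low', 'author_3': 'high', 'author_4': 'low'}
```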

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High-ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest to peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or the number of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be affected by limits or “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn suggest that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



West-Puckett, Stephanie. Digital Badging as Participatory Assessment. CE, Nov. 2016. Posted 11/17/2016.

Stephanie West-Puckett presents a case study of the use of “digital badges” to create a local, contextualized, and participatory assessment process that works toward social justice in the writing classroom.

She notes that digital badges are graphic versions of those earned by scouts or worn by members of military groups to signal “achievement, experience, or affiliation in particular communities” (130). Her project, begun in Fall 2014, grew out of Mozilla’s free Open Badging Initiative and out of grants from the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC) to four universities as well as to museums, libraries, and community partnerships to develop badging as a way of recognizing learning (131).

West-Puckett employed badges as a way of encouraging and assessing student engagement with the outcomes and habits of mind included in such documents as the Framework for Success in Postsecondary Writing, the Outcomes Statement for First-Year Composition produced by the Council of Writing Program Administrators, and her own institution’s outcomes statement (137). Her primary goal is to foster a “participatory” process that foregrounds the agency of teachers and students and recognizes the ways in which assessment can influence classroom practice. She argues that such participation in designing and interpreting assessments can address the degree to which assessment can drive bias and limit access and agency for specific groups of learners (129).

She reviews composition scholarship characterizing most assessments as “top-down” (127-28). In these practices, West-Puckett argues, instruments such as rubrics become “fetishized,” with the result that they are forced upon contexts to which they are not relevant, thus constraining the kinds of assignments and outcomes teachers can promote (134). Moreover, assessments often fail to encourage students to explore a range of literacies and do not acknowledge learners’ achievements within those literacies (130). More valid, for West-Puckett, are “hyperlocal” assessments designed to help teachers understand how students are responding to specific learning opportunities (134). Allowing students to join in designing and implementing assessments makes the learning goals visible and shared while limiting the power of assessment tools to marginalize particular literacies and populations (128).

West-Puckett contends that the multimodal focus in writing instruction exacerbates the need for new modes of assessment. She argues that digital badges partake of “the primacy of visual modes of communication,” especially for populations “whose bodies were not invited into the inner sanctum of a numerical and linguistic academy” (132). Her use of badges contributes to a form of assessment that is designed not to deride writing that does not meet the “ideal text” of an authority but rather to enlist students’ interests and values in “a dialogic engagement about what matters in writing” (133).

West-Puckett argues for pairing digital badging with “critical validity inquiry,” in which the impact of an assessment process is examined through a range of theoretical frames, such as feminism, Marxism, or queer or disability theory (134). This inquiry reveals assessment’s role in sustaining or potentially disrupting entrenched views of what constitutes acceptable writing by examining how such views confer power on particular practices (134-35).

In West-Puckett’s classroom in a “mid-size, rural university in the south” with a high percentage of students of color and first-generation college students (135), small groups of students chose outcomes from the various outcomes statements, developed “visual symbols” for the badges, created a description of the components and value of the outcomes for writing, and detailed the “evidence” that applicants could present from a range of literacy practices to earn the badges (137). West-Puckett hoped that this process would decrease the “disconnect” between her understanding of the outcomes and that of students (136), as well as engage students in a process that takes into account the “lived consequences of assessment” (141): its disparate impact on specific groups.

The case study examines several examples of badges, such as one using a compass to represent “rhetorical knowledge” (138). The group generated multimodal presentations, and applicants could present evidence in a range of forms, including work done outside of the classroom (138-39). The students in the group decided whether or not to award the badge.

West-Puckett details the degree to which the process invited “lively discussion” by examining the “Editing MVP” badge (139). Students defined editing as proofreading and correcting one’s own paper but visually depicted two people working together. The group denied the badge to a student of color because of grammatical errors but awarded it to another student, who argued for the value of using non-standard dialogue to show people “‘speaking real’ to each other” (qtd. in West-Puckett 140). West-Puckett recounts the classroom discussion of whether editing could be a collaborative effort and of when and in what contexts correctness matters (140).

In Fall 2015, West-Puckett implemented “Digital Badging 2.0” in response to her concerns about “the limited construct of good writing some students clung to” as well as how to develop “badging economies that asserted [her] own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers” (142). She created two kinds of badging activities, one carried out by students as before, the other for her own assessment purposes. Students had to earn all the student-generated badges in order to pass, and a given number of West-Puckett’s “Project Badges” to earn particular grades (143). She states that she privileges “engagement as opposed to competency or mastery” (143). She maintains that this dual process, in which her decision-making process is shared with the students who are simultaneously grappling with the concepts, invites dialogue while allowing her to consider a wide range of rhetorical contexts and literacy practices over time (144).

West-Puckett reports that although she found evidence that the badging component did provide students an opportunity to take more control of their learning, as a whole the classes did not “enjoy” badging (145). They expressed concern about the extra work, the lack of traditional grades, and the responsibility involved in meeting the project’s demands (145). However, in disaggregated responses, students of color and lower-income students viewed the badge component favorably (145). According to West-Puckett, other scholars have similarly found that students in these groups value “alternative assessment models” (146).

West-Puckett lays out seven principles that she believes should guide participatory assessment, foregrounding the importance of making the processes “open and accessible to learners” in ways that “allow learners to accept or refuse particular identities that are constructed through the assessment” (147). In addition, “[a]ssessment artifacts,” in this case badges, should be “portable” so that students can use them beyond the classroom to demonstrate learning (148). She presents badges as an assessment tool that can embody these principles.



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as in time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that “intellimetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access! provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor, by contrast, responds to the specific errors in each individual text.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which, however, allowed access to “rich data” and “detailed description” of actual use (167). They note also that other AEE programs might yield different results. Lack of data on revisions students made before submitting their drafts also may have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because AEE scores prompt revision, such programs can effectively augment writing instruction, but they recommend that scoring track student development so that, as students near the maximum score at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and provide vocabulary so that students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).


Zuidema and Fredricksen. Preservice Teachers’ Use of Resources. August RTE. Posted 09/25/2016.

Zuidema, Leah A., and James E. Fredricksen. “Resources Preservice Teachers Use to Think about Student Writing.” Research in the Teaching of English 51.1 (2016): 12-36. Print.

Leah A. Zuidema and James E. Fredricksen document the resources used by students in teacher-preparation programs. The study examined transcripts collected from VoiceThread discussions among 34 preservice teachers (PSTs) (16). The PSTs reviewed and discussed papers provided by eighth- and ninth-grade students in Idaho and Indiana (18).

Zuidema and Fredricksen define “resource” as “an aid or source of evidence used to help support claims; an available supply that can be drawn upon when needed” (15). They intend their study to move beyond determining what writing teachers “get taught” to discovering what kinds of resources PSTs actually use in developing their theories and practices for K-12 writing classrooms (13-14).

The literature review suggests that the wide range of concepts and practices presented in teacher-preparation programs varies depending on local conditions and is often augmented by students’ own educational experiences (14). The authors find very little systematic study of how beginning teachers actually draw on the methods and concepts their training provides (13).

Zuidema and Fredricksen see their study as building on prior research by systematically identifying the resources teachers use and assigning them to broad categories to allow a more comprehensive understanding of how teachers use such sources to negotiate the complexities of teaching writing (15-16).

To gather data, the researchers developed a “community of practice” by building their methods courses around a collaborative project focusing on assessing writing across two different teacher-preparation programs (16-17). Twenty-six Boise State University PSTs and eight from Dordt, a small Christian college, received monthly sets of papers from the eighth and ninth graders, which they then assessed individually and with others at their own institutions.

The PSTs then worked in groups through VoiceThread to respond to the papers in three “rounds,” first “categoriz[ing]” the papers according to strengths and weaknesses; then categorizing and prioritizing the criteria they relied on; and finally “suggest[ing] a pedagogical plan of action” (19). This protocol did not explicitly ask PSTs to name the resources they used but revealed these resources via the transcriptions (19).

The methods courses taught by Zuidema and Fredricksen included “conceptual tools” such as “guiding frameworks, principles, and heuristics,” as well as “practical tools” like “journal writing and writer’s workshop” (14). PSTs read professional sources and participated in activities that emphasized the value of sharing writing with students (17). Zuidema and Fredricksen contend that a community of practice in which professionals explain their reasoning as they assess student writing encourages PSTs to “think carefully about theory-practice connections” (18).

In coding the VoiceThread conversations, the researchers focused on “rhetorical approaches to composition” (19), characterized as attention to “arguments and claims . . . , evidence and warrants,” and “sources of support” (20). They found five categories of resources PSTs used to support claims about student writing:

  • Understanding of students and student writing (9% of instances)
  • Knowledge of the context (10%)
  • Colleagues (11%)
  • PSTs’ roles as writers, readers, and teachers (17%)
  • PSTs’ ideas and observations about writing (54%) (21)

In each case, Zuidema and Fredricksen developed subcategories. For example, “Understanding of students and student writing” included “Experience as a student writer” and “Imagining students and abilities,” while “Colleagues” consisted of “Small-group colleagues,” “More experienced teachers,” “Class discussion/activity,” and “Professional reading” (23).

Category 1, “Understanding of students and student writing,” was used “least often,” with PSTs referring to their own student-writing experiences only six times out of 435 recorded instances (24). The researchers suggest that this category might have been used more had the PSTs been able to interact with the students (24). They see “imagining” how students react to assignments as important, a “way [teachers] can develop empathy” and cultivate interest in how students understand writing (24).

Category 2, “Knowledge of Context as a Resource,” was also seldom used. Those who did draw on it tended to note issues involving what Zuidema and Fredricksen call GAPS: rhetorical awareness of the “genre, audience, purpose, and situation of the writing” (25). Other PSTs noted the role of the prompt in inviting strong writing. The researchers believe these types of awareness encourage more sophisticated assessment of student work (25).

The researchers express surprise that Category 3, “Colleagues,” was used so seldom (26). Colleagues in the small groups were cited most often, but despite specific encouragement to do so, several groups did not draw on this resource at all. Zuidema and Fredricksen note that reference to the resource increased through the three rounds. Also surprising was the low rate of reference to mentors and experienced teachers and to class discussion, activities, and assignments: only one participant mentioned a required “professional reading” as a resource (27). Noting that the PSTs may have used concepts from mentors and class assignments without explicitly naming them, the authors cite prior research suggesting that reference to outside sources can be perceived as undercutting the authority conferred by experience (27).

In Category 4, “Roles as Resources,” Zuidema and Fredricksen note that PSTs were much more likely to draw on their roles as readers or teachers than as writers (28). Arguing that a reader perspective signals an awareness of the importance of audience, the researchers note that most PSTs in their study treated their own individual reader responses as most pertinent, suggesting the need to emphasize the varied perspectives readers might bring to a text (28).

Fifty-four percent of the PSTs’ references invoked “Writing as a Resource” (29). Included in this category were “imagined ideal writing,” “comparisons across student writing,” “holistic” references to “whole texts,” and “excerpts” (29-31). PSTs’ uses of these resources ranged from “a rigid, unrhetorical view of writing” in which “rules” governed assessment (29) to a more effective practice that “connected [student writing] with a rhetorical framework” (29). Excerpts, for example, could be used for “keeping score” on “checklists” or as a means of noting patterns and suggesting directions for teaching (31). Comparisons among students and expectations for other students at similar ages, Zuidema and Fredricksen suggest, allowed some PSTs to reflect on developmental issues, while holistic evaluation allowed consideration of tone, audience, and purpose (30).

Zuidema and Fredricksen conclude that in encouraging preservice teachers to draw on a wide range of resources, “exposure was not enough” (32), and “[m]ere use is not the goal” (33). Using their taxonomy as a teaching tool, they suggest, may help PSTs recognize the range of resources available to them and “scaffold their learning” (33) so that they will be able to make informed decisions when confronted with the multiple challenges inherent in today’s diverse and sometimes “impoverished” contexts for teaching writing (32).



Grouling and Grutsch McKinney. Multimodality in Writing Center Texts. C&C, in press, 2016. Posted 08/21/2016.

Grouling, Jennifer, and Jackie Grutsch McKinney. “Taking Stock: Multimodality in Writing Center Users’ Texts.” Computers and Composition (2016), in press. Web. 12 Aug. 2016. http://dx.doi.org/10.1016/j.compcom.2016.04.003

Jennifer Grouling and Jackie Grutsch McKinney note that composition scholars have accepted the need for multimodal instruction for more than a decade (1). But they argue that the scholarship supporting multimodality as “necessary and appropriate” in classrooms and writing centers has tended to be “of the evangelical vein,” consisting of “think pieces” rather than actual studies of how multimodality figures in classroom practice (2).

They present a study of multimodality in their own program at Ball State University as a step toward research that explores what kinds of multimodal writing take place in composition classrooms (2). Ball State, they report, is well positioned to shed light on this question because “there has been programmatic and curricular support here [at Ball State] for multimodal composition for nearly a decade now” (2).

The researchers focus on texts presented to the writing center for feedback. They ask three specific questions:

  • Are collected texts from writing center users multimodal?
  • What modes do students use in creation of their texts?
  • Do students call their texts multimodal? (2)

For two weeks in the spring semester, 2014, writing center tutors asked students visiting the center to allow their papers to be included in the study. Eighty-one of 214 students agreed. Identifying information was removed and the papers stored in a digital folder (3).

During those two weeks as well as the next five weeks, all student visitors to the center were asked directly if their projects were multimodal. Students could respond “yes,” “no,” or “not sure” (3). The purpose of this extended inquiry was to ensure that responses to the question during the first two “collection” weeks were not in some way unrepresentative. Grouling and Grutsch McKinney note that the question could be answered online or in person; students were not provided with a definition of “multimodal” even if they expressed confusion but only told to “answer as best they could” (3).

The authors decided against basing their study on the argument advanced by scholars like Jody Shipka and Paul Prior that “all communication practices have multimodal components” because such a definition did not allow them to see the distinctions they were investigating (3). Definitions like those presented by Tracey Bowen and Carl Whithaus that emphasize the “conscious” use of certain components also proved less helpful because students were not interviewed and their conscious intent could not be accessed (3). However, Bowen and Whithaus also offered a “more succinct definition” that proved useful: “multimodality is the ‘designing and composing beyond written words'” (qtd. in Grouling and Grutsch McKinney 3).

Examination of the papers led the researchers to code for a “continuum” of multimodality rather than a present/not-present binary (3-4). Fifty-seven papers, or 70.4%, were composed only in words and were coded as zero, or “monomodal” (4). Some papers occupied a “grey area” because of elements like bulleted lists and tables: texts using bullets were coded “1” and those using lists and tables “2.” These categories shared the designation “elements of graphic design,” and 16 papers (19.8%) met it. Codes “3” and “4” indicated one or more modes beyond text and thus “multimodal” work. No paper received a “4”; only eight (9.9%) received a “3,” indicating inclusion of one mode beyond words (4). Thus, the study materials exhibited little use of multimodal elements (4).
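
Treated as a simple ordinal scheme, the coding continuum might be sketched as follows; the labels paraphrase the description above rather than reproduce the researchers’ actual codebook:

```python
# The 0-4 multimodality continuum as summarized above; labels are paraphrases.
MULTIMODALITY_CODES = {
    0: "monomodal: written words only",
    1: "elements of graphic design: bullets",
    2: "elements of graphic design: lists and tables",
    3: "multimodal: one mode beyond words (e.g., chart, graph, image)",
    4: "multimodal: more than one mode beyond words",
}

def is_multimodal(code: int) -> bool:
    """Only codes 3 and 4 counted as multimodal in the study."""
    return code >= 3

# Reported distribution across the 81 collected texts: 57 at code 0,
# 16 at codes 1-2, eight at code 3, and none at code 4.
```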

In answer to the second question, findings indicated that modes used even by papers coded “3” included only charts, graphs, and images. None used audio, video, or animation (4). Grouling and Grutsch McKinney posit that the multimodal elements were possibly not “created by the student” and that the instructor or template may have prompted the inclusion of such materials (5).

They further report that they could not tell whether any student had “consciously manipulated” elements of the text to make it multimodal (5). They observe that in two cases, students used visual elements apparently intended to aid in development of a paper in progress (5).

The “short answer” to the third research question, whether students saw their papers as multimodal, was “not usually” (5; emphasis original). Only 6% of writers across 637 appointments, and 6% of the writers of the 81 collected texts, answered yes. In only one case in which a student identified the paper as multimodal did the coders agree; two of the five texts called multimodal by students received a code of 0 from the raters (5). Students were better able to recognize when their work was not multimodal: 51 of the 70 texts coded by the raters as monomodal were also recognized as such by their authors (5).

Grouling and Grutsch McKinney express concern that students seem unable to identify multimodality, given that such work is required in both first-year courses; even taking transfer students into account, the authors note, “the vast majority” of undergraduates will have taken a relevant course (6). They state that they would be less concerned about students not using the term if the work produced exhibited multimodal features, but this was not the case (6).

University system data indicated that a plurality of writing center attendees came from writing classes, but students from other courses produced some of the few multimodal pieces, though they did not use the term (7).

Examining program practices, Grouling and Grutsch McKinney determined that often only one assignment was designated “multimodal”—most commonly, presentations using PowerPoint (8). The authors advocate for “more open” assignments that present multimodality “as a rhetorical choice, and not as a requirement for an assignment” (8). Such emphasis should be accompanied by “programmatic assessment” to determine what students are actually learning (8-9).

The authors also urge more communication across the curriculum about the use of multiple modes in discipline-specific writing. While noting that advanced coursework in a discipline may have its own vocabulary and favored modes, Grouling and Grutsch McKinney argue that sharing the vocabulary from composition studies with faculty across disciplines will help students see how concepts from first-year writing apply in their coursework and professional careers (9).

The authors contend that instructors and tutors should attend to “graphic design elements” like “readability and layout” (10). In all cases, they argue, students should move beyond simply inserting illustrations into text to a better “integration” of modes to enhance communication (10). Further, incorporating multimodal concepts in invention and composing can enrich students’ understanding of the writing process (10). Such developments, the authors propose, can move the commitment to multimodality beyond the “evangelical phase” (11).

 



Anson, Chris M. Expert Writers and Genre Transfer. CCC, June 2016. Posted 07/09/2016.

Anson, Chris M. “The Pop Warner Chronicles: A Case Study in Contextual Adaptation and the Transfer of Writing Ability.” College Composition and Communication 67.4 (2016): 518-49. Print.

Chris M. Anson presents a case study of an expert writer, “Martin,” attempting to “transfer” his extensive writing experience to the production of seventy-five-word “game summaries” for his son’s Pop Warner football team. The study leads Anson to argue that current theory on transfer does not fully account for Martin’s experiences working in a new genre and to advocate for a “more nuanced understanding of existing ability, disposition, context, and genre in the deployment of knowledge for writing” (520).

Martin wrote the summaries to fulfill a participation requirement for families of Pop Warner players (522). He believed that the enormous amount of writing he did professionally and his deep understanding of such concepts as rhetorical strategies and composing processes made the game-summary assignment an appropriate choice (522). The summary deadline was the evening of the Sunday after each Saturday game; the pieces appeared in a local newspaper each Thursday (523).

Martin logged his writing activities during a twelve-week period, noting that he wrote multiple genres, both formal and informal, for his academic job (520). For the game summaries, he received verbal and emailed guidance from the team coordinator. This guidance allowed him to name the genre, define an audience (principally, team families), and recognize specific requirements, such as including as many players as possible each week and mentioning every player at least once, always in a positive light, during the season (523-24). Martin learned that the team coordinator would do a preliminary edit, then pass the summaries on to the newspaper editors (524).

Anson writes that Martin’s first challenge was to record the games through extensive notes on a legal pad, matching players against a team roster. When Martin sat down on the Sunday following the game to write his first summary, he was surprised to find himself “paralyzed” (526). The effort to be accurate while making the brief account “interesting and punchy” took much longer than Martin had anticipated (526-27). Moreover, it earned only derision from his two sons, primarily for its “total English professor speak”: long sentences and “big words” (528).

On advice from his wife, Martin tightened the draft, in his view “[taking] the life completely out of it” (528). When the summary appeared in the newspaper, it had been further shortened and edited, in ways that made no sense to Martin, for example, word substitutions that sometimes opted for “plain[er]” language but other times chose “fancier” diction (530). He notes that he was offered no part in these edits and received no feedback beyond seeing the final published version (529).

Martin experienced similar frustration throughout the season, struggling to intuit and master the conventions of the unfamiliar genre. His extensive strengths were “beside the point” (531); faced with this new context, a “highly successful writer” became “a ‘struggling’ or ‘less effective’ writer” (531-32).

Anson draws on Anne Beaufort’s model of discourse knowledge to analyze Martin’s struggles. He reports that Beaufort lists five “knowledge domains” that affect the ability to write in a particular context:

writing process knowledge, subject matter knowledge, rhetorical knowledge, and genre knowledge, all of which are enveloped and informed by knowledge of the discourse community. (532; italics original)

In his analysis of Martin’s situation, Anson contends that Martin possessed the kind of reflective awareness of both writing process knowledge and rhetorical knowledge that theoretically would allow him to succeed in the new context (533). He notes that some scholarship suggests that such knowledge developed over years of practice can actually impede transfer because familiar genres are in fact “overpracticed,” resulting in “discursive entrenchment,” for example when students cannot break free of a form like the five-paragraph theme (533). Anson argues, however, that because of his “meta-level awareness” of the new situation, Martin was able to make deliberate decisions about how to address the new exigencies (533-34).

Anson further maintains that, as a reasonably attentive sports fan, Martin possessed sufficient subject-matter knowledge to comprehend the broad genre of sports reporting into which the game summaries fell (534-35).

Anson finds genre knowledge and knowledge of the discourse community central to Martin’s challenge. Martin had to accommodate the “unique variation” on sports reporting that the summaries imposed with their focus on children’s activities and their attention to the specific expectations of the families and the team coordinator (535).

Moreover, Anson cites scholarship challenging the notion that any genre can be permanently “stabilized” by codified, uniformly enforced rules (536). On the contrary, this scholarship posits, genres are “ever changing sets of socially acceptable strategies that participants can use to improvise their responses to a particular situation” (Catherine E. Schryer, qtd. in Anson 536), thus underscoring Beaufort’s claim that the nature of the relevant discourse community “subsumes” all other aspects of transfer, including genre knowledge (536).

In Anson’s analysis, the discourse community within which Martin functioned was complex and problematic. Far from unifying around accepted norms, the community consisted of a number of “transient” groups of families and officials who produced unstable “traditions”; moreover, Anson posits that the newspaper editors’ priorities differed from those of the team coordinator and families (537).

The study leads Anson to propose that external factors will usually override the individual strengths writers bring to new tasks. He notes agreement among scholars that “[t]ransfer theories are always ‘negative’,” recognizing that transfer always requires “significant cognitive effort and some degree of training” (539). Anson argues that Martin’s experiences align with theories of “strong negative transfer,” which state that writers will always struggle to adjust to new tasks and contexts (539-40).

Anson urges scholarship on transfer to apply a “principle of uniqueness” that recognizes that each situation brings together a unique set of exigencies and abilities. While noting that Martin is “qualitatively different” from writers in composition classrooms (541), Anson contends that students face similar struggles when they are constantly routed across contexts where genre rules change radically, often because of the preferences of individual instructors (541-42). A foundational course alone, he states, cannot adequately nurture the flexibility students need to navigate these landscapes, nor is there adequate articulation and conceptual consensus across the different disciplines in which students must perform (541). Moreover, he claims, students seldom receive the kind of mentoring that will enable success even when they import strong skills.

In a twist at the conclusion of the article, Anson reveals that he is “Martin” (544). The existence of such a genre-resistant article itself, he suggests, illustrates that his full understanding of the discourse community engaged with a composition journal like College Composition and Communication provided him with “the confidence and authority” to “strategically deviate from the expectations of a genre” in which he was an expert (544). In contrast, in his role as “Martin,” interacting with the Pop Warner community, he lacked this confidence and authority and therefore felt unable “to bend the Pop Warner summary genre to fit his typical flexibility and creativity” (543-44). This sense of constraint, he suggests, drove his/Martin’s search for the “genre stability” (543) that would provide the guidance a writer new to a discourse community needs to succeed.

Thus the ability to mesh a writer’s own practices with the requirements of a genre, he argues, demands more than rhetorical, genre, subject-matter, and procedural knowledge; it demands an understanding of the specific, often unique, discourse community, knowledge which, as in the case of the Pop Warner community, may be unstable, contradictory, or difficult to obtain (539).

 


Head, Samuel L. Burke’s Identification in Facebook. C&C, Mar. 2016. Posted 05/10/2016.

Head, Samuel L. “Teaching Grounded Audiences: Burke’s Identification in Facebook and Composition.” Computers and Composition 39 (2016): 27-40. Web. 05 May 2016.

Samuel L. Head uses Kenneth Burke’s concept of identification to argue for Facebook as a pedagogical tool to increase students’ audience awareness in composition classes.

Head cites a range of scholarship that recognizes the rhetorical skills inherent in students’ engagement with social media, particularly Facebook, and that urges composition specialists to take up this engagement with “very real audiences” (27) to encourage transfer of this kind of audience connection to academic writing (27-28, 29). Noting that, according to the National Research Council, new learning depends on “transfer based on previous learning” (qtd. in Head 28), Head contends that, while much scholarship has explored what Facebook and other digital media have to offer, “the pedagogy of transfer with students’ previous experience and prior knowledge of audience in social media requires more scholarly analysis” (28).

In Head’s view, among the skills developed by participation in social media is the ability to adjust content to different audiences in varied contexts (28). He offers Burkeian identification as a means of theorizing this process and providing practices to encourage transfer. Further analysis of transfer comes from work by D. N. Perkins and Gavriel Salomon, who distinguish between “low road transfer” and “high road transfer.”

Low-road transfer occurs when a learner moves specific skills between fairly similar environments; Head’s example is applying cooking skills learned at home in a restaurant setting. High-road transfer, in contrast, involves using skills in very different contexts. This kind of transfer requires abstract thinking and reflection in order to recognize the applicability of skills across disparate domains (30). Burke’s theory, Head writes, offers a means of evoking the kind of reflection needed to facilitate high-road transfer between the very different contexts of Facebook and a writing class (30, 31).

Head reports on Burke’s identification as a means of persuasion, distinguishing between classical rhetoric’s focus on deliberate efforts at persuasion and the “subconscious” aspects of identification (32); without identification, according to Dennis Day, persuasion cannot occur (cited in Head 31). Identification allows communicators to show that they are “consubstantial” with audiences, thus “bridg[ing] division” (31). This process invokes shared values in order to win audience adherence to new ideas (32).

Head explores aspects of identification theory, including “cunning” identification, in which the values a communicator claims to share with an audience are not genuine but are manufactured to generate persuasive identification, and therefore work only to the extent that the audience believes them to be genuine (32). In particular, he notes analysis by George Cheney that identifies “three main strategies” in Burke’s theory: “[t]he common ground technique,” which focuses on shared aspects; “[i]dentification through antithesis,” or the establishment of a “common enemy”; and “[t]he assumed or transcendent ‘we,’” which creates group allegiance (qtd. in Head 32). Current scholarship such as that of Tonja Mackey supports Head’s claim that components of identification inform everyday Facebook interaction (33).

Head reports on Facebook’s algorithm for determining how users connect with friends. This process, according to Eli Pariser, creates a “filter bubble” as Facebook attempts to present material of interest to each user (qtd. in Head 33). Head suggests that students may not be aware that this “filter bubble” may be concealing more complex combinations of ideas and information; introduction to the theory of identification in the classroom may make them more alert to the strategies that both link them to like-minded audiences and that direct them away from more challenging encounters (33).

Postings by an anonymous “example Facebook user” illustrate the three Burkeian strategies pointed out by Cheney as they inform a Facebook timeline (34). This user establishes common ground by sharing photos and posts that reflect a religious affiliation as well as an interest in fantasy that connects him to many friends. He establishes a common enemy by posting in opposition to “mandated health care” and collecting likes (34). Finally, he generates a sense of the “transcendent ‘we’” by appealing to group membership in a National Novel Writing Month (NaNoWriMo) experience (34). These examples, in Head’s view, demonstrate the degree to which identification is a natural component of Facebook interactions.

For Head, transferring this inherent identification to an academic environment involves explicit instruction in the theory of identification as well as students’ reflection on how these strategies can be applied in more formal or novel settings. Students can recognize the strategies and moves that constitute identification in their own Facebook interactions, then locate similar moves in other types of writing, and finally apply them consciously as they connect with academic audiences (35).

Head contends that more teachers need to use platforms familiar to students, like Facebook, to teach rhetorical skills and awareness; he urges teachers to share their experiences with these media and to publish analyses of their findings (36). He reports that his own students enjoyed beginning their rhetorical curriculum with a medium with which they were already engaged, using their own work as a starting point (35). He concludes with a schedule of suggested assignments for making the tenets of identification visible in Facebook and transferring awareness of them to academic projects (36-39).


Del Principe and Ihara. Reading at a Community College. TETYC, Mar. 2016. Posted 04/10/2016.

Del Principe, Annie, and Rachel Ihara. “‘I Bought the Book and I Didn’t Need It’: What Reading Looks Like at an Urban Community College.” Teaching English in the Two-Year College 43.3 (2016): 229-44. Web. 10 Mar. 2016.

Annie Del Principe and Rachel Ihara conducted a qualitative study of student reading practices at Kingsborough Community College, CUNY. They interviewed ten students and gathered course materials from them over the span of the students’ time at the college between fall 2011 and fall 2013, amassing “complete records” for five (231). They found a variety of definitions of acceptable reading practices across disciplines; they urge English faculty to recognize this diversity, but they also advocate for more reflection from faculty in all academic subject areas on the purposes of the reading they assign and how reading can be supported at two-year colleges (242).

Four of the five students who were intensively studied placed into regular first-year composition and completed associate’s degrees while at Kingsborough; the fifth enrolled in a “low-level developmental writing class” and transferred to a physician assistant program at a four-year institution in 2015 (232). The researchers’ inquiry covered eighty-three different courses and included twenty-three hours of interviews (232).

The authors’ review of research on reading notes that many different sources across institutions and disciplines see difficulty with reading as a reason that students often struggle in college. The authors recount a widespread perception that poor preparation, especially in high school, and students’ lack of effort are to blame for students’ difficulties, but they contend that the ways in which faculty frame and use reading also influence how students approach assigned texts (230). Faculty, Del Principe and Ihara write, often do not see teaching reading as part of their job and opt for modes of instruction that convey information in ways they perceive as efficient, such as lecturing extensively and explaining difficult texts rather than helping students work through them (230).

A 2013 examination of seven community colleges in seven states by the National Center on Education and the Economy (NCEE) reported that the kinds of reading and writing students do in these institutions “are not very cognitively challenging”; don’t require students “to do much” with assigned reading; and demand “performance levels” that are only “modest” (231). This study found that more intensive work on analyzing and reflecting on texts occurred predominantly in English classes (231). The authors argue that because community-college faculty are aware of the problems caused by reading difficulties, these faculty are “constantly experimenting” with strategies for addressing them; this focus, in the authors’ view, makes community colleges important spaces for investigating reading issues (231).

Del Principe and Ihara note that in scholarship by Linda Adler-Kassner and Heidi Estrem and by David Jolliffe as well as in the report by NCEE, the researchers categorize the kinds of reading students are asked to do in college (232-33). The authors state that their “grounded theory approach” (232) differs from the methods in these works in that they

created categories based on what students said about how they used reading in their classes and what they did (or didn’t do) with the assigned reading rather than on imagined ways of reading or what was ostensibly required by the teacher or by the assignment. (233)

This methodology produced “five themes”:

  • “Supplementing lecture with reading” (233). Students reported this activity in 37% of the courses examined, primarily in non-English courses that depended largely on lecture. Although textbooks were assigned, students received most of the information in lectures and turned to reading to “deepen [their] understanding” or for help when the lecture proved inadequate in some way (234).
  • “Listening and taking notes as text” (233). This practice, encountered in 35% of the courses, involved situations in which a textbook or other reading was listed on the syllabus but either implicitly or explicitly designated as “optional.” Instructors provided handouts or PowerPoint outlines; students combined these with notes from class to create de facto “texts” on which exams were based. According to Del Principe and Ihara, “This marginalization of long-form reading was pervasive” (235).
  • “Reading to complete a task” (233). In 24% of the courses, students reported using reading for in-class assignments like lab reports or quizzes; in one case, a student described a collaborative group response to quizzes (236). Other activities included homework such as doing math problems. Finally, students used reading to complete research assignments. The authors discovered very little support for or instruction on the use and evaluation of materials incorporated into research projects and posit that much of this reading may have focused on “dubious Internet sources” and may have involved cutting and pasting (237).
  • “Analyzing text” (233). Along with “reflecting on text,” below, this activity occurred “almost exclusively” in English classes (238). The authors describe assignments calling for students to attend to a particular line or idea in a text or to compare themes across texts. Students reported finding “on their own” that they had to read more slowly and carefully to complete these tasks (238).
  • “Reflecting on text” (233). Only six of the eighty-three courses asked students to “respond personally” to reading; only one of the six was not an English course (239). The assignments generally led to class discussion in which, according to the students, few class members participated, possibly because “Nobody [did] the reading” (student, qtd. in Del Principe and Ihara 239; emendation original).

Del Principe and Ihara focus on the impact of instructors’ “following up” on their assignments with activities that “require[d] students to draw information or ideas directly from their own independent reading” (239). Such follow-up surfaced in only fourteen of the eighty-three classes studied, six of them English classes. Follow-up in English courses included informal responses and summaries as well as assigned uses of outside material in longer papers; in other courses, quizzes or exams encouraged reading (240). The authors found that in courses with no follow-up, “students typically did not do the reading” (241).

Del Principe and Ihara acknowledge that composition professionals will find the data “disappointing,” but they feel it is important not to let a “specific disciplinary lens” misdirect the field into dismissing the uses students and other faculty make of different kinds of reading (241). In many classes, they contend, reading serves to back up other kinds of information rather than serving as the principal focus, as it does in English classes. However, they do ask for more reflection across the curriculum, noting that students are often required to purchase expensive books that are never used. They hope to trigger an “institutional inquiry” that will foster more consideration of how instructors in all fields can encourage the kinds of reading they want students to do (242).



Lancaster, Zak. Discourse Templates in They Say/I Say. CCC, Feb. 2016. Posted 03/13/2016.

Lancaster, Zak. “Do Academics Really Write This Way? A Corpus Investigation of Moves and Templates in They Say/I Say.” College Composition and Communication 67.3 (2016): 437-64. Print.

Zak Lancaster analyzes three corpora of academic writing to assess the usefulness of the “templates” provided for student use in the textbook They Say/I Say (TSIS), by Gerald Graff and Cathy Birkenstein. Lancaster ultimately concludes that the most cogent critique of TSIS is not that it encourages students to use “formulaic” constructions but rather that the book does not supply students with the templates academics actually use and hence is not “formulaic” in the ways that would most effectively shape students’ understanding of academic discourse (450).

Lancaster focuses on the book’s provision of specific sets of word strings to help students structure their arguments: first, phrases that acknowledge counter-arguments, and second, phrases that concede alternative points of view while, in Graff and Birkenstein’s words, “still standing your ground” (qtd. in Lancaster 440). Lancaster recounts that the use of formulas to guide students in incorporating others’ viewpoints has provoked debate, with some analysts endorsing the effort to supply students with explicit language for “moves” in the academic conversations they are expected to enter, and others characterizing the provision of such specific language as a “decontextualized” approach guilty of “reducing argumentation down to a two-part dialogue” (438).

For Lancaster, this debate, though meaningful, leaves unanswered the more basic question of whether the templates provided by TSIS actually “capture the tacitly valued discursive strategies used in academic discourses” (439). Lancaster finds this question important because linguistic analysis indicates that variations in wording shape “different roles for the reader . . . and different authorial personae, or stances,” conveying different values and encouraging different approaches to argumentation (440).

Lancaster cites research showing that what some linguists call “lexical bundles” are indeed common in academic writing across disciplines. “[H]ighly functional” phrases such as “it should be noted that,” or “the extent to which” are used more often by expert writers than by students (441). Lancaster’s example of “hedging formulas” such as “in some cases” or “appears to be” introduces his claim that such formulas have an “interpersonal function” in concert with their “ideational meanings” (442), supplying the same information but creating different valences in the reader/writer relationship.

Research on student texts, Lancaster reports, shows that students often succumb to what some scholars call “myside bias,” struggling to include counterarguments (443). In Lancaster’s view, evidence that students who are able to overcome this bias produce more complex, “mature” arguments (444) justifies strategies like those in TSIS to open students to a more dialogic approach to argument, which they may tend to see as a matter of “winning” rather than negotiating meaning (444). Lancaster claims, however, that TSIS could provide “more systematic attention to the details of language” to offer more substantive guidance in the ways these details affect interpersonal meanings (444).

Lancaster examines three corpora: one of expert academic writing drawn from “almost 100 peer-reviewed journals across disciplines”; one of “829 high-graded papers” by advanced undergraduates and “early graduate students across sixteen fields”; and one of “19,456 directed self-placement (DSP) essays” from the University of Michigan and Wake Forest University (444-45). Lancaster examined each body of writing using “concordancing software” to search for the exact phrases proposed by TSIS, to find other phrases serving the same functions, and to examine the precise context of each formula to make sure that it functioned like those featured in TSIS (445). The tables presenting the findings report “the normalized frequency” of occurrences rather than raw counts (446).
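The “normalized frequency” Lancaster mentions is the standard corpus-linguistics move of scaling raw counts to a common baseline, such as occurrences per million words, so that corpora of very different sizes can be compared. The short Python sketch below illustrates only that calculation; the sample texts, phrase list, and per-1,000-word scaling are hypothetical stand-ins, not Lancaster’s data or his concordancing software.

```python
import re

def normalized_frequency(phrase, corpus, per=1_000_000):
    """Exact-phrase hits per `per` words of running text."""
    words = len(corpus.split())
    hits = len(re.findall(re.escape(phrase), corpus, flags=re.IGNORECASE))
    return (hits / words) * per if words else 0.0

# Tiny illustrative "corpora"; the real ones run to millions of words.
corpora = {
    "expert": "It is true that results vary. One might argue otherwise.",
    "first-year": "It is true that school is hard. It is true that I tried.",
}

for phrase in ["it is true that", "one might argue"]:
    for name, text in corpora.items():
        freq = normalized_frequency(phrase, text, per=1000)
        print(f"{name:>10} | {phrase:<16} | {freq:6.1f} per 1,000 words")
```

Because the counts are scaled by each corpus’s word total, a phrase that appears twice in a short student essay can legitimately outrank one that appears dozens of times in a much larger expert corpus.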

Analysis of the ways in which the writers in the corpora “entertain objections” revealed “six recurring options,” which Lancaster ranks from most to least direct. The most direct moves are “Naming the reader” and “Naming your naysayers” (a characterization quoted from TSIS); less direct are “Unattributed” moves like “One might argue” or passive-voice constructions; most indirect are nominalizations (“Another explanation”) and what linguist Geoff Thompson calls the “Hypothetical-Real” formula: phrases like “At first glance” or “It may appear that” that suggest the writer will delve beneath the surface to present unrecognized truths (447-48).

Analysis indicates that first-year writers did consider alternative views, at frequencies comparable to those in the more advanced work. In all corpora, indirect phrases were used much more often than direct ones; Graff and Birkenstein’s “Naming your naysayers” was the least frequently used option (448-49). Though advanced writers did “name the readers” more often than the first-year writers, they preferred indirect approaches at even higher rates than less advanced writers (450).

Lancaster posits that the use of more indirect choices by more advanced writers, counter to the guidance in TSIS, suggests that writers resist claiming to know what readers think, a form of “interpersonal tact” (448). Importantly for Lancaster, the specific phrasings offered in TSIS “do not appear in any of the corpora” (450). Similar but subtly different phrasings perform these functions (450-51).

Lancaster’s discussion of concession notes that while TSIS describes this move in terms of “‘overcoming’ objections” (qtd. in Lancaster 452), for linguists, such interactions create “solidarity with interlocutors by affirming and validating their views” (452). Lancaster draws on the work of James R. Martin and Peter R. White to base his analysis on the concept of “concede + counter,” in which a concession move is signaled with “high-certainty adverbials” like “undoubtedly” or “to be sure,” while the counter follows through words like “yet” or “at the same time.” Lancaster notes that in advanced samples the opening concession phrase may not appear at all (452), with the result that the move may be inconsistently tagged by the software (453).
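Lancaster does not specify how his software recognizes these moves, but a minimal, hypothetical sketch of pattern-based tagging clarifies why a counter whose opening concession adverbial is missing would go undetected. The adverbial and connective word lists, the two-sentence window, and the function name below are illustrative assumptions, not the article’s method.

```python
import re

# Hypothetical regex tagger for Martin and White's "concede + counter":
# a high-certainty concession adverbial followed, within the same sentence
# or the next one, by a countering connective. Word lists are illustrative.
CONCEDE = re.compile(r"\b(undoubtedly|to be sure|admittedly|of course)\b", re.I)
COUNTER = re.compile(r"\b(yet|but|however|at the same time|nevertheless)\b", re.I)

def tag_concede_counter(text):
    """Return two-sentence windows where a concession signal precedes a counter."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for i, sent in enumerate(sentences):
        window = " ".join(sentences[i : i + 2])  # current sentence + the next
        c = CONCEDE.search(sent)
        k = COUNTER.search(window)
        if c and k and k.start() > c.start():
            hits.append(window)
    return hits

sample = ("To be sure, templates help novices. Yet they can flatten nuance. "
          "They can flatten nuance, however, in subtler cases too.")
print(tag_concede_counter(sample))
# The third sentence's counter ("however") goes untagged because no overt
# concession precedes it -- the kind of inconsistency Lancaster notes.
```

On this logic, the advanced writers’ habit of dropping the overt concession phrase leaves nothing for the concession pattern to match, so their “concede + counter” moves are systematically undercounted relative to the first-year writers’ more explicit wordings.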

Findings indicate more explicit use of concession by the less experienced writers (452). Lancaster proposes that this difference may result from the placement-essay writers’ sense that they were expected to “strike an adversarial stance” requiring more “direct language”; conversely, the software may not have picked up more subtle moves by more advanced writers (453). First-year samples were much more likely to include the kinds of wordings TSIS recommends, such as “It is true that. . . .” (454). However, none of the writers at any level used “personalized and overt signals” like “I concede that” or “Proponents of X are right” (454).

In investigating the “counter,” Lancaster discovered that the direct phrases encouraged by TSIS, such as “I still VERB that,” were not favored by any group; shorter, less direct wordings predominated. In fact, “On the other hand,” recommended by TSIS, tended to indicate a contrast between two positions rather than a “counter” following a concession (454).

Lancaster extracts three conclusions: all groups opted most often for indirect means of considering objections; writers consistently chose to “eagerly” endorse shared viewpoints when conceding; and less experienced writers used more direct concessions like those suggested by TSIS (455).

Differences in genre and context, Lancaster notes, may affect the validity of his findings. However, he sees “interpersonal tact” as “an implicit guiding principle” that is “pervasive” in academic writing (456-57). He notes that TSIS formulas do use hedges, but posits that the authors may not “see” these interpersonal markers because the hedging phrases have become naturalized (457).

In Lancaster’s view, TSIS often echoes a common perception of argument as a form of combat. He argues that the best academic writing more closely resembles a conversational exchange, and he suggests that attention to the specific details of academic language provided by “systematic analysis” (459) such as corpus research can refocus instruction on how academics actually incorporate interpersonal meanings into their discourse and on how students can best use these moves when they wish to enter academic conversations (458-59).