College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Moore & MacArthur. Automated Essay Evaluation. JoWR, June 2016. Posted 10/04/2016.

Moore, Noreen S., and Charles A. MacArthur. “Student Use of Automated Essay Evaluation Technology During Revision.” Journal of Writing Research 8.1 (2016): 149-75. Web. 23 Sept. 2016.

Noreen S. Moore and Charles A. MacArthur report on a study of 7th- and 8th-graders’ use of Automated Essay Evaluation technology (AEE) and its effects on their writing.

Moore and MacArthur define AEE as “the process of evaluating and scoring written prose via computer programs” (M. D. Shermis and J. Burstein, qtd. in Moore and MacArthur 150). The current study was part of a larger investigation of the use of AEE in K-12 classrooms (150, 153-54). Moore and MacArthur focus on students’ revision practices (154).

The authors argue that such studies are necessary because “AEE has the potential to offer more feedback and revision opportunities for students than may otherwise be available” (150). Teacher feedback, they posit, may not be “immediate” and may be “ineffective” and “inconsistent” as well as “time consuming,” while the alternative of peer feedback “requires proper training” (151). The authors also posit that AEE will increasingly become part of the writing education landscape and that teachers will benefit from “participat[ing]” in explorations of its effects (150). They argue that AEE should “complement” rather than replace teacher feedback and scoring (151).

Moore and MacArthur review extant research on two kinds of AEE, one that uses “Latent Semantic Analysis” (LSA) and one that has been “developed through model training” (152). Studies of an LSA program owned by Pearson and designed to evaluate summaries compared the program with “word-processing feedback” and showed greater improvement across many traits, including “quality, organization, content, use of detail, and style,” as well as more time spent on revision (152). Other studies also showed improvement. Moore and MacArthur note that some of these studies relied on scores from the program itself as indices of improvement and did not demonstrate any transfer of skills to contexts outside of the program (153).

Moore and MacArthur contend that their study differs from previous research in that it does not rely on “data collected by the system” but rather uses “real time” information from think-aloud protocols and semi-structured interviews to investigate students’ use of the technology. Moreover, their study reveals the kinds of revision students actually do (153). They ask:

  • How do students use AEE feedback to make revisions?
  • Are students motivated to make revisions while using AEE technology?
  • How well do students understand the feedback from AEE, both the substantive feedback and the conventions feedback? (154)

The researchers studied six students selected to be representative of a 12-student 7th- and 8th-grade “literacy class” at a private northeastern school whose students exhibited traits “that may interfere with school success” (154). The students were in their second year of AEE use and the teacher in the third year of use. Students “supplement[ed]” their literacy work with in-class work using the “web-based MY Access!” program (154).

Moore and MacArthur report that the “IntelliMetric” scoring used by MY Access! correlates highly with scoring by human raters (155). The software is intended to analyze “focus/coherence, organization, elaboration/development, sentence structure, and mechanics/conventions” (155).

MY Access! provides feedback through MY Tutor, which responds to “non-surface” issues, and MY Editor, which addresses spelling, punctuation, and other conventions. MY Tutor provides a “one sentence revision goal”; “strategies for achieving the goal”; and “a before and after example of a student revising based on the revision goal and strategy” (156). The authors further note that “[a]lthough the MY Tutor feedback is different for each score point and genre, the same feedback is given for the same score in the same genre” (156). MY Editor responds to specific errors in each text individually.

Each student submitted a first and revised draft of a narrative and an argumentative paper, for a total of 24 drafts (156). The researchers analyzed only revisions made during the think-aloud; any revision work prior to the initial submission did not count as data (157).

Moore and MacArthur found that students used MY Tutor for non-surface feedback only when their submitted essays earned low scores (158). Two of the three students who used the feature appeared to understand the feedback and used it successfully (163). The authors report that for the students who used it successfully, MY Tutor feedback inspired a larger range of changes and more effective changes in the papers than feedback from the teacher or from self-evaluation (159). These students’ changes addressed “audience engagement, focusing, adding argumentative elements, and transitioning” (159), whereas teacher feedback primarily addressed increasing detail.

One student who scored high made substantive changes rated as “minor successes” but did not use the MY Tutor tool. This student used MY Editor and appeared to misunderstand the feedback, concentrating on changes that eliminated the “error flag” (166).

Moore and MacArthur note that all students made non-surface revisions (160), and 71% of these efforts were suggested by AEE (161). However, 54.3% of the total changes did not succeed, and MY Editor suggested 68% of these (161). The authors report that the students lacked the “technical vocabulary” to make full use of the suggestions (165); moreover, they state that “[i]n many of the instances when students disagreed with MY Editor or were confused by the feedback, the feedback seemed to be incorrect” (166). The authors report other research that corroborates their concern that grammar checkers in general may often be incorrect (166).

As limitations, the researchers point to the small sample, which nonetheless allowed access to “rich data” and “detailed description” of actual use (167). They note also that other AEE programs might yield different results. The lack of data on revisions students made before submitting their drafts may also have affected the results (167). The authors supply appendices detailing their research methods.

Moore and MacArthur propose that because AEE scores prompt revision, such programs can effectively augment writing instruction, but they recommend that scoring track student development so that, as students near the maximum score at a given level, new criteria and scores encourage more advanced work (167-68). Teachers should model the use of the program and supply vocabulary so that students better understand the feedback. Moore and MacArthur argue that effective use of such programs can help students understand criteria for writing assessment and refine their own self-evaluation processes (168).

Research recommendations include asking whether scores from AEE continue to encourage revision and investigating how AEE programs differ in procedures and effectiveness. The study did not examine teachers’ approaches to the program. Moore and MacArthur urge that stakeholders, including “the people developing the technology and the teachers, coaches, and leaders using the technology . . . collaborate” so that AEE “aligns with classroom instruction” (168-69).


Zuidema and Fredricksen. Preservice Teachers’ Use of Resources. August RTE. Posted 09/25/2016.

Zuidema, Leah A., and James E. Fredricksen. “Resources Preservice Teachers Use to Think about Student Writing.” Research in the Teaching of English 51.1 (2016): 12-36. Print.

Leah A. Zuidema and James E. Fredricksen document the resources used by students in teacher-preparation programs. The study examined transcripts collected from VoiceThread discussions among 34 preservice teachers (PSTs) (16). The PSTs reviewed and discussed papers provided by eighth- and ninth-grade students in Idaho and Indiana (18).

Zuidema and Fredricksen define “resource” as “an aid or source of evidence used to help support claims; an available supply that can be drawn upon when needed” (15). They intend their study to move beyond determining what writing teachers “get taught” to discovering what kinds of resources PSTs actually use in developing their theories and practices for K-12 writing classrooms (13-14).

The literature review suggests that the wide range of concepts and practices presented in teacher-preparation programs varies depending on local conditions and is often augmented by students’ own educational experiences (14). The authors find very little systematic study of how beginning teachers actually draw on the methods and concepts their training provides (13).

Zuidema and Fredricksen see their study as building on prior research by systematically identifying the resources teachers use and assigning them to broad categories to allow a more comprehensive understanding of how teachers use such sources to negotiate the complexities of teaching writing (15-16).

To gather data, the researchers developed a “community of practice” by building their methods courses around a collaborative project focusing on assessing writing across two different teacher-preparation programs (16-17). Twenty-six Boise State University PSTs and 8 from a small Christian college, Dordt, received monthly sets of papers from the eighth and ninth graders, which they then assessed individually and with others at their own institutions.

The PSTs then worked in groups through VoiceThread to respond to the papers in three “rounds,” first “categoriz[ing]” the papers according to strengths and weaknesses; then categorizing and prioritizing the criteria they relied on; and finally “suggest[ing] a pedagogical plan of action” (19). This protocol did not explicitly ask PSTs to name the resources they used but revealed these resources via the transcriptions (19).

The methods courses taught by Zuidema and Fredricksen included “conceptual tools” such as “guiding frameworks, principles, and heuristics,” as well as “practical tools” like “journal writing and writer’s workshop” (14). PSTs read professional sources and participated in activities that emphasized the value of sharing writing with students (17). Zuidema and Fredricksen contend that a community of practice in which professionals explain their reasoning as they assess student writing encourages PSTs to “think carefully about theory-practice connections” (18).

In coding the VoiceThread conversations, the researchers focused on “rhetorical approaches to composition” (19), characterized as attention to “arguments and claims . . . , evidence and warrants,” and “sources of support” (20). They found five categories of resources PSTs used to support claims about student writing:

  • Understanding of students and student writing (9% of instances)
  • Knowledge of the context (10%)
  • Colleagues (11%)
  • PSTs’ roles as writers, readers, and teachers (17%)
  • PSTs’ ideas and observations about writing (54%) (21)

In each case, Zuidema and Fredricksen developed subcategories. For example, “Understanding of students and student writing” included “Experience as a student writer” and “Imagining students and abilities,” while “Colleagues” consisted of “Small-group colleagues,” “More experienced teachers,” “Class discussion/activity,” and “Professional reading” (23).

Category 1, “Understanding of students and student writing,” was used “least often,” with PSTs referring to their own student-writing experiences only six times out of 435 recorded instances (24). The researchers suggest that this category might have been used more had the PSTs been able to interact with the students (24). They see “imagining” how students react to assignments as important because it offers a “way [teachers] can develop empathy” and build interest in how students understand writing (24).

Category 2, “Knowledge of Context as a Resource,” was also seldom used. Those who did refer to it tended to note issues involving what Zuidema and Fredricksen call GAPS: rhetorical awareness of “genre, audience, purpose, and situation of the writing” (25). Other PSTs noted the role of the prompt in inviting strong writing. The researchers believe these forms of awareness encourage more sophisticated assessment of student work (25).

The researchers express surprise that Category 3, “Colleagues,” was used so seldom (26). Colleagues in the small groups were cited most often, but despite specific encouragement to do so, several groups did not draw on this resource. Zuidema and Fredricksen note that reference to the resource increased through the three rounds. Also surprising was the low rate of reference to mentors and experienced teachers and to class discussion, activities, and assignments: only one participant mentioned a required “professional reading” as a resource (27). Observing that the PSTs may have used concepts from mentors and class assignments without explicitly naming them, the authors cite prior research suggesting that reference to outside sources can be perceived as undercutting the authority conferred by experience (27).

In Category 4, “Roles as Resources,” Zuidema and Fredricksen note that PSTs were much more likely to draw on their roles as readers or teachers than as writers (28). Arguing that a reader perspective signals awareness of the importance of audience, the researchers note that most PSTs in their study perceived their own individual reader responses as most pertinent, suggesting the need to emphasize the varied perspectives readers might bring to a text (28).

Fifty-four percent of the PSTs’ references invoked “Writing as a Resource” (29). Included in this category were “imagined ideal writing,” “comparisons across student writing,” “holistic” references to “whole texts,” and “excerpts” (29-31). PSTs’ uses of these resources ranged from “a rigid, unrhetorical view of writing” in which “rules” governed assessment (29) to a more effective practice that “connected [student writing] with a rhetorical framework” (29). For example, excerpts could serve for “keeping score” on “checklists” or as a means of noting patterns and suggesting directions for teaching (31). Comparisons among students and expectations for other students at similar ages, Zuidema and Fredricksen suggest, allowed some PSTs to reflect on developmental issues, while holistic evaluation allowed consideration of tone, audience, and purpose (30).

Zuidema and Fredricksen conclude that in encouraging preservice teachers to draw on a wide range of resources, “exposure was not enough” (32), and “[m]ere use is not the goal” (33). Using their taxonomy as a teaching tool, they suggest, may help PSTs recognize the range of resources available to them and “scaffold their learning” (33) so that they will be able to make informed decisions when confronted with the multiple challenges inherent in today’s diverse and sometimes “impoverished” contexts for teaching writing (32).



Webb-Sunderhaus, Sara. “Tellability” and Identity Performance. Sept. CE, 2016. Posted 09/18/2016.

Webb-Sunderhaus, Sara. “‘Keep the Appalachian, Drop the Redneck’: Tellable Student Narratives of Appalachian Identity.” College English 79.1 (2016): 11-33. Print.

Sara Webb-Sunderhaus explores the concept of “tellability” as a means of understanding how students in composition classes perform identities. She argues that these identities often emerge from the relationship between their individual experiences and public discourses validated by the audiences they are likely to encounter.

Webb-Sunderhaus’s specific focus is the construction of identity by people who designate themselves or are designated by others as “Appalachian.” Self-identifying as an “Urban Appalachian,” that is, an individual who has moved from a region considered part of Appalachia to a larger city (13, 31n5), Webb-Sunderhaus conducted an ethnographic study at two anonymous institutions in Appalachia (13). She examines the classroom activity, written work, and interview responses of six students in writing classes at these institutions in light of the students’ connection to Appalachia.

Webb-Sunderhaus presents contested definitions of Appalachia, including those of the Appalachian Regional Commission and the Central Appalachian Network, both of which use geographical measures (14). In contrast, Webb-Sunderhaus cites Benedict Anderson’s notion of Appalachia as “an imagined community” (qtd. in Webb-Sunderhaus 14), and Appalachian Studies scholar Allen Batteau’s description of it as “a literary and a political invention rather than a geographical discovery” (qtd. in Webb-Sunderhaus 14). Webb-Sunderhaus argues that efforts to define Appalachianness may miss the diversity of individuals who identify with the region; she stresses that this identity is “a cultural identity, rooted in the place of the Appalachian mountains, but not necessarily restricted to this place alone” (16).

Tellability, a concept used by scholars in the social sciences and folklore, involves the relationship between a particular narrative and widespread public discourses about a given phenomenon, in this case, Appalachianness (16). These public discourses determine which narratives accord with common assumptions and widely shared impressions of the phenomenon. A narrative that is tellable fits and reinforces the extant public narratives; accounts that resist these public narratives may not earn what Michael Kearns calls “the audience’s active validation” (16) and are therefore not tellable (16-17). Tellability, Webb-Sunderhaus maintains, is a function of audience. Writers and speakers are aware of the discourses their audiences expect based on the given rhetorical constraints; what is tellable in one context may be untellable in another (22).

This process of negotiating identities through astute choices of tellable narratives, Webb-Sunderhaus writes, accords with Judith Butler’s view of identity as “a performance that is repeated” by “a reenactment and reexperiencing of a set of meanings already established” (Butler, qtd. in Webb-Sunderhaus 17). Tellable narratives provide what Debra Journet calls “tropes of authenticity” necessary to such re-enactment (qtd. in Webb-Sunderhaus 21).

Webb-Sunderhaus interprets her study of how tellability influences students’ rhetorical decisions as they perform identities in a classroom setting as evidence that students are keenly aware of what kinds of narratives are tellable and that, in a number of cases, they based these decisions on what they assumed the instructor expected (29). In one case, a student “fabricated” details (23) to conform to what she saw as the teacher’s belief that affinity with nature is a feature of Appalachianness; the reality of the student’s childhood did not meet this expectation and was therefore an untellable response to the essay assignment (23-24).

Drawing on Nedra Reynolds, Webb-Sunderhaus notes a distinction between “perceived” and “conceived” spaces as components of identity. A perceived space designates physical surroundings that can be apprehended through the senses, such as the landscape of Appalachia, while a conceived space is the way an environment is represented mentally, incorporating sociocultural components, attitudes, and values (20).

Students in Webb-Sunderhaus’s study, she writes, exhibited an understanding of this distinction, noting ways in which being born in or from Appalachia often contrasted with their relationship to Appalachia as individuals. One student acknowledged being physically linked to Appalachia but rejected even some of the “positive” stereotypes she felt were culturally associated with the region (25). Another specifically disconnected her Appalachian birthplace and subsequent experiences, arguing that tellable narratives of Appalachians as tied to place did not represent her own willingness to “explore the world” (“Gladys,” qtd. in Webb-Sunderhaus 26).

Webb-Sunderhaus sees in this type of resistance to common tellable narratives a form of what Ann K. Ferrell calls “stigma management” (28). Many tellable narratives of Appalachia focus on negatives like poverty, illiteracy, narrow-mindedness, and even criminality and incest (18). In Webb-Sunderhaus’s view, resistance to an Appalachian identity defined by such narratives can act as a distancing strategy when such narratives are invoked (28). At the same time, according to Webb-Sunderhaus, the student who rejected the “down-home” component of an Appalachian identity may have recognized that in the setting of a research study, her more cosmopolitan identity narrative would be tellable in a way that it might not be in other contexts (28).

Webb-Sunderhaus emphasizes the power of teachers in “inviting” and approving particular narratives (28). For example, she writes that by picking up on a student’s reluctant reference to moonshining in his family history and sharing a similar family history, she encouraged him to incorporate this component of the public discourse about Appalachia into his own identity (21). Similarly, the student who embellished her narrative was praised by the teacher for her “imagery and pastoralism” (qtd. in Webb-Sunderhaus 22); such responses, Webb-Sunderhaus contends, quoting Thomas Newkirk, reveal “the seductiveness of deeply rooted and deeply satisfying narratives that place us in familiar moral positions” (qtd. in Webb-Sunderhaus 24).

The power of this seductiveness, in Webb-Sunderhaus’s view, creates rhetorical pressure on students who are asked to perform identities in writing classrooms. While teachers hope that students will produce writing that authentically represents their views and experiences, the authenticity and “reliability” of a performance can easily be judged by its adherence to the common and therefore tellable public discourses in which the teacher may be immersed (28-29). Responding to Zan Meyer Gonçalves, Webb-Sunderhaus writes that the hope of making a classroom a place where students can “feel honest and safe” (qtd. in Webb-Sunderhaus 29) may overlook the degree to which students’ educational histories have led them to make strategic decisions (29) about how to “negotiate successfully [a] particular literacy event” (24).

In this view, the kinds of clichéd endorsements of popular discourses that teachers would like to see students overcome may be among the options the teachers are inadvertently inviting as they convey their own sense that some narratives are tellable in their classrooms while others are not (30).


Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of the peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses with student rankings on a “numeric version” of a “community rubric” in My Reviewers, a software package that allowed for discursive comments but, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They find a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), identifying only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer review so that it benefits the widest range of students (73-74).



Arnold, Lisa. International Response to Rhet/Comp Theory. CS, Spring 2016. Posted 06/14/2016.

Arnold, Lisa R. “‘This is a Field that’s Open, not Closed’: Multilingual and International Writing Faculty Respond to Composition Theory.” Composition Studies 44.1 (2016): 72-88. Web. 02 June 2016.

Lisa R. Arnold discusses the responses of teachers at the American University of Beirut (AUB) to canonical texts of rhetoric and composition theory, in particular “Language Difference in Writing: A Translingual Approach,” by Bruce Horner, Min-Zhan Lu, Jacqueline Jones Royster, and John Trimbur. Arnold notes that in Lebanon, where translingualism is an “everyday reality” (80), the question of how to accommodate and value multiple language practices can resonate very differently than it does in the presumably monolingual North-American context in which the theory was proposed.

As the first director of the AUB writing program, Arnold hoped to provide faculty with professional development opportunities (75), while also responding to calls from scholars like Mary N. Muchiri and her colleagues and Christiane Donahue for composition professionals in North America to recognize “the diverse pedagogical traditions, methods of research, and values attached to literacy in non-U.S. contexts” (72).

As an “American-style university that is a leader in the Middle-East North-Africa (MENA) region,” AUB presents an opportunity for the study of such issues because it is “unique” among institutions outside of North America in having four “full-time, professorial-rank” lines for rhetoric and composition PhDs; the university also plans to implement an M.A. in rhetoric and composition (74).

In order to further faculty engagement with composition theory, the university offered a ten-session seminar during the 2013-2014 academic year. The sessions, attended by seventeen AUB faculty with varied levels of experience teaching in the program, explored a range of topics addressing writing theory and instruction (75). The final sessions each semester addressed teaching writing in the particular context of Lebanon/AUB.

Arnold attended all seminar sessions as a participant-observer and subsequently conducted interviews with fifteen participants, asking them to focus on what seemed “most relevant” to teaching and to the specific environment of AUB (77). Five faculty who had audited a previous graduate course on writing theory and pedagogy facilitated the sessions. Participants also completed an anonymous survey (76).

General responses indicated that faculty found rhetoric and composition theory to be “open,” “tolerant,” and “concrete,” as well as engaged with students as individual writers (77-78). The issue of translingualism was among the discussions that inspired a range of responses, especially in regard to the question of how rhetoric and composition theory applied to teaching in Lebanon (78).

The Horner et al. article, which attendees read during the final fall-semester session, addressed the monolingual audience that presumably characterizes North American contexts. To this audience, according to Arnold, Horner et al. argue that rather than being treated as “an obstacle to be overcome,” difference in language should be viewed through a lens that “takes advantage of and appreciates students’ different strengths in English as well as in other languages and . . . reflects the heterogeneity of communicative practices worldwide” (79).

AUB faculty expressed interest in the theory but also voiced concerns about what it might mean in their context when implemented in the classroom. Many seminar attendees brought backgrounds in EFL or ESL to the sessions; Arnold reports general agreement that a “more flexible approach toward language difference” would be worth considering (79).

Concern, however, seemed to center on the degree to which a more tolerant attitude toward error might affect students’ need to learn formal English in order to succeed in the non-U.S. context (80). Arnold writes that in Lebanon, as in the African contexts discussed by Muchiri et al., universities like AUB are “highly selective” and “English carries a different value for its users” (80). She notes the concerns of “Rania,” who posits that British universities expect less expertise in English from students from “developing nations” who will presumably return home after graduation than from native speakers. Rania fears that allowing students flexibility in their use of English will become a process of withholding “correct English” in order to impose “a new form of colonialism” (81). However, according to Arnold, Rania subsequently came to appreciate the opportunities for learning offered by a translingual approach (81).

The response of “Rasha” similarly indicates ambivalence toward translingualism. Students either liked the opportunity to use Arabic or, in her words, “just hated it” (qtd. in Arnold 82), but she found that discussions of whether or not such multilingual practice was appropriate increased student engagement with issues of language use itself (82). Other examples demonstrate that students do translingual work regardless of the teacher’s goals, for example, using Arabic for group work (82). A number of the teachers drew on their own experiences as learners of multiple languages to encourage students to embrace the challenges involved in a multilingual context. Arnold reports that these teachers felt empowered by translingual theory to draw on language difference as a resource (84-85).

Teachers like “Malik,” however, highlighted the importance of providing students with the kinds of English skills that would serve them in their culture (83), while “Jenna” expressed concerns that the increased tolerance urged by Horner et al. would lead students to become “too confident” that audiences would understand translingually inflected communication: “[Students] get this false perception of abilities and skills which are not there” (qtd. in Arnold 85).

For Arnold, her experience working with writing instructors charged with teaching English outside of an English-speaking environment gives presence to the theoretical precepts of translingualism. She notes that graduates of rhetoric and composition programs may often find themselves taking jobs or providing resources to colleagues outside of the North-American context, and she urges these graduates to attend to the degree to which their multilingual colleagues are often already unacknowledged “experts in their own right” with regard to working with language difference (87):

[T]here is a complexity to literacy practices and pedagogies that practitioners outside of North America understand deeply, and from which those of us trained in a presumably monolingual context can learn. (87)



Comer and White. MOOC Assessment. CCC, Feb. 2016. Posted 04/18/2016.

Comer, Denise K., and Edward M. White. “Adventuring into MOOC Writing Assessment: Challenges, Results, and Possibilities.” College Composition and Communication 67.3 (2016): 318-59. Print.

Denise K. Comer and Edward M. White explore assessment in the “first-ever first-year-writing MOOC,” English Composition I: Achieving Expertise, developed under the auspices of the Bill & Melinda Gates Foundation, Duke University, and Coursera (320). Working with “a team of more than twenty people” with expertise in many areas of literacy and online education, Comer taught the course (321), which enrolled more than 82,000 students, 1,289 of whom received a Statement of Accomplishment indicating a grade of 70% or higher. Nearly 80% of the students “lived outside the United States” and for a majority, English was not the first language, although 59% of these said they were “proficient or fluent in written English” (320). Sixty-six percent had bachelor’s or master’s degrees.

White designed and conducted the assessment, which addressed concerns about MOOCs as educational options. The authors recognize MOOCs as “antithetical” (319) to many accepted principles in writing theory and pedagogy, such as the importance of interpersonal instructor/student interaction (319), the imperative to meet the needs of a “local context” (Brian Huot, qtd. in Comer and White 325), and a foundation in disciplinary principles (325). Yet the authors contend that as “MOOCs are persisting,” refusing to address their implications will undermine the ability of writing studies specialists to influence practices such as Automated Essay Scoring, which has already been attempted in four MOOCs (319). Designing a valid assessment, the authors state, will allow composition scholars to determine how MOOCs affect pedagogy and learning (320) and from those findings to understand more fully what MOOCs can accomplish across diverse populations and settings (321).

Comer and White stress that assessment processes extant in traditional composition contexts can contribute to a “hybrid form” applicable to the characteristics of a MOOC, such as the “scale” of the project and the “wide heterogeneity of learners” (324). Models for assessment in traditional environments as well as online contexts had to be combined with new approaches that addressed the “lack of direct teacher feedback and evaluation and limited accountability for peer feedback” (324).

For Comer and White, this hybrid approach must accommodate the degree to which the course combined the features of an “xMOOC” governed by a traditional academic course design with those of a “cMOOC,” in which learning occurs across “network[s]” through “connections” largely of the learners’ creation (322-23).

Learning objectives and assignments mirrored those familiar to compositionists, such as the ability to “[a]rgue and support a position” and “[i]dentify and use the stages of the writing process” (323). Students completed four major projects, the first three incorporating drafting, feedback, and revision (324). Instructional videos and optional workshops in Google Hangouts supported assignments like discussion forum participation, informal contributions, self-reflection, and peer feedback (323).

The assessment itself, designed to shed light on how best to assess such contexts, consisted of “peer feedback and evaluation,” “Self-reflection,” three surveys, and “Intensive Portfolio Rating” (325-26).

The course supported both formative and evaluative peer feedback through “highly structured rubrics” and extensive modeling (326). Students who had submitted drafts each received responses from three other students, and those who submitted final drafts received evaluations from four peers on a 1-6 scale (327). The authors argue that despite the level of support peer review requires, it is preferable to more expert-driven or automated responses because they believe that

what student writers need and desire above all else is a respectful reader who will attend to their writing with care and respond to it with understanding of its aims. (327)

They found that the formative review, although taken seriously by many students, was “uneven,” and students varied in their appreciation of the process (327-29). Meanwhile, the authors interpret the evaluative peer review as indicating that “student writing overall was successful” (330). Peer grades closely matched those of the expert graders, and, while marginally higher, were not inappropriately high (330).

The MOOC provided many opportunities for self-reflection, which the authors denote as “one of the richest growth areas” (332). They provide examples of student responses to these opportunities as evidence of committed engagement with the course; a strong desire for improvement; an appreciation of the value of both receiving and giving feedback; and awareness of opportunities for growth (332-35). More than 1,400 students turned in “final reflective essays” (335).

Self-efficacy measures revealed that students exhibited an unexpectedly high level of confidence in many areas, such as “their abilities to draft, revise, edit, read critically, and summarize” (337). Somewhat lower confidence levels in their ability to give and receive feedback persuade the authors that a MOOC emphasizing peer interaction served as an “occasion to hone these skills” (337). The greatest gain occurred in this domain.

Nine “professional writing instructors” (339) assessed portfolios for 247 students who had both completed the course and opted into the IRB component (340). This assessment confirmed that while students might not be able to “rely consistently” on formative peer review, peer evaluation could effectively supplement expert grading (344).

Comer and White stress the importance of further research in a range of areas, including how best to support effective peer response; how ESL writers interact with MOOCs; what kinds of people choose MOOCs and why; and how MOOCs might function in WAC/WID situations (344-45).

The authors stress the importance of avoiding “extreme concluding statements” about the effectiveness of MOOCs based on findings such as theirs (346). Their study suggests that different learners valued the experience differently; those who found it useful did so for varied reasons. Repeating that writing studies must take responsibility for assessment in such contexts, they emphasize that “MOOCs cannot and should not replace face-to-face instruction” (346; emphasis original). However, they contend that even enrollees who interacted briefly with the MOOC left with an exposure to writing practices they would not have gained otherwise and that the students who completed the MOOC satisfactorily amounted to more students than Comer would have reached in 53 years teaching her regular FY sessions (346).

In designing assessments, the authors urge, compositionists should resist the impulse to focus solely on the “Big Data” produced by assessments at such scales (347-48). Such a focus can obscure the importance of individual learners who, they note, “bring their own priorities, objectives, and interests to the writing MOOC” (348). They advocate making assessment an activity for the learners as much as possible through self-reflection and through peer interaction, which, when effectively supported, “is almost as useful to students as expert response and is crucial to student learning” (349). Ultimately, while the MOOC did not succeed universally, it offered many students valuable writing experiences (346).


Geiger II, T J. “Relational Labor” in Composition. CS, Sept. 2015. Posted 11/23/2015.

Geiger II, T J. “An Intimate Discipline? Writing Studies, Undergraduate Majors, and Relational Labor.” Composition Studies 43.2 (2015): 92-112. Web. 03 Nov. 2015.

T J Geiger II examines undergraduate writing majors as sites in which “relational labor” forms a large part of faculty activities and shapes student perceptions. He considers the possibility that, despite the tendency to view dedicated writing majors as a step toward disciplinary status, the centrality of relational labor to writing instruction may undercut this status. Further, he addresses the concern that support for writing majors may devalue writing instruction itself (94). He focuses on “what the field learns” about these concerns when it listens to students in these programs (98).

Through surveys and interviews with undergraduate majors in “independent writing programs” at a “Private Research University” and a private “Liberal Arts College” (98), Geiger establishes “relational labor” as work done between faculty and undergraduate writing majors in which the personal connections formed contribute to the students’ representations of their learning. The students discussed and quoted consider their relationships with faculty crucial to their “personal development” (“Mark,” qtd. in Geiger 99), which Geiger characterizes in this student’s case “as synonymous with writing development” (99; emphasis original). He argues that faculty attentiveness to the affective components of writing instruction provides students with a sense of a caring audience interested not just in conveying the technical aspects of writing but also in fostering the growth of “unique” individuals through social encounters (102):

Interactions with faculty, part of the context for writing, encourage not only writing majors’ literacy acquisition, but also a sense of themselves as individuals who matter, which in turn can fuel their capacity to take rhetorical action. (99)

Geiger develops this picture of faculty engaged in relational labor against a range of scholarship that has expressed concern about “the ideological complex that figures the composition teacher as a maid/mother disciplinarian,” a characterization he attributes to Susan Miller’s 1991 critique (106). Similarly, he addresses Kelly Ritter’s critique of a “gendered ideology of ‘help'” that Ritter sees as potentially “counterproductive to the discipline of composition studies as a whole” (qtd. in Geiger 106).

Geiger detects justification for these concerns in students’ use of terms like “lovely,” “nice,” and “help” in describing their interactions with faculty (106). His question is whether accepting the role of empathetic helper or the centrality to writing instruction of affective responsiveness necessarily restricts the field’s focus to the “teaching of writing” rather than “teaching about writing” (96; emphasis original).

These concerns accord with those expressed in a larger debate about whether the field should “distance” itself from the constraints that some see as imposed by first-year writing courses, concerns that Geiger notes are themselves broached in affective terms of escape and freedom, indicating that attention to “feeling” permeates all levels of the field (96).

Among the specific concerns that Geiger explores are the ways in which intensive interpersonal investment in students and their work can intersect with professional exigencies. He notes the Modern Language Association’s 2006 report, “Standing Still: The Associate Professor Survey,” which finds that women in the field report marginally less time spent on research and marginally more on teaching than men; these small differences seem to add up over time to a slower path to promotion for female faculty (107-08). In addition, he addresses the possibility that students who cast faculty as empathetic helpers downplay their role as experts with valuable knowledge to convey (106).

In Geiger’s view, the disciplinary promise of a focus on “teaching about writing” need not be at odds with a pedagogy that values developmental relationships between faculty and students (109). He cites student responses that express appreciation not just for the personal interaction but also for the access to professional expertise faculty provide during those interactions. He quotes “Jeremiah,” for whom “faculty in the writing program understand themselves as not just research producers, but also as people working with their students” (qtd. in Geiger 102). Indeed, Geiger claims, “students recognize the need for informed care” (108; emphasis original). Such an understanding on the part of students, Geiger argues, demonstrates that investment in a writing major need not crowd out pedagogical value (102), while, conversely, a focus on the teaching of writing through an ideology of “care” need not interfere with more intensive study of writing as disciplinary content (107).

Students interviewed do recognize the professional burdens with which faculty must contend and value the personal investment some faculty are still able to make in students’ individual projects and growth (104); Geiger advocates for ongoing consideration of how this ubiquitous and clearly valued kind of labor “is distributed within a program and a writing major” (108).

In addition, Geiger argues that developing as writers through close working relationships with faculty instills in students an understanding of writing as a rhetorical process:

situated, not context-free; social, not solely personal; collaborative, not entirely individual; and (though less often) explicitly politically implicated, not neutral. (108)

Building on the collection by Greg A. Giberson and Thomas A. Moriarty, Geiger urges composition professionals to attend not just to “what we are becoming” (108; emphasis original) but also to “who . . . writing majors (i.e., students) are becoming” (109; emphasis original). Attention to the students’ own perceptions, he contends, provides informative indications of these attainments (109).



Combs, Frost, and Eble. Collaborative Course Design in Scientific Writing. CS, Sept. 2015. Posted 11/12/15.

Combs, D. Shane, Erin A. Frost, and Michelle F. Eble. “Collaborative Course Design in Scientific Writing: Experimentation and Productive Failure.” Composition Studies 43.2 (2015): 132-49. Web. 11 Nov. 2015.

Writing in the “Course Design” section of Composition Studies, D. Shane Combs, Erin A. Frost, and Michelle F. Eble describe a science-writing course taught at East Carolina University, “a doctoral/research institution with about 27,000 students, serv[ing] a largely rural population” (132). The course has been taught by the English department since 1967 as an upper-level option for students in the sciences, English, and business and technical communication. The course also acts as an option for students to fulfill the requirement to take two writing-intensive (WI) courses, one in the major; as a result, it serves students in areas like biology and chemistry. The two to three sections per semester offered by English are generally taught by “full-time teaching instructors” and sometimes by tenured/tenure-track faculty in technical and professional communication (132).

Combs et al. detail iterations of the course taught by Frost and Eble, who had not taught it before. English graduate student D. Shane Combs contributed as a peer mentor. Inclusion of the peer mentor as well as the incorporation of university-wide writing outcomes into the course-specific outcomes resulted from a Quality Enhancement Plan underway at the university as a component of its reaccreditation. This plan included a special focus on writing instruction, for example, a Writing Mentors program that funded peer-mentor support for WI instruction. Combs, who was sponsored by the English department, brought writing-center experience as well as learning from “a four-hour professional development session” to his role (133).

Drawing on work by Donna J. Haraway, Sandra Harding, and James C. Wilson, Frost and Eble collaboratively designed their sections of the course, intent “on moving students into a rhetorical space where they can explore the socially constructed nature of science, scientific rhetoric, and scientific traditions” (134). In their classes, the instructors announced that they would be teaching from “an ‘apparent feminist’ perspective,” in Frost’s case, and from “a critical gender studies approach” in Eble’s (134-35). The course required three major assignments: field research on scientific writing venues in an area of the student’s choice; “a complete scientific article” for one of the journals that had been investigated; and a conversion of the scientific article into a general-audience article appropriate for CNN.com (135). A particular goal of these assignments was to provoke cognitive dissonance in order to raise questions of how scientific information can be transmitted “in responsible ways” as students struggled with the selectivity needed for general audiences (135).

Other components of students’ grades were class discussion, a “scripted oral debate completed in small groups,” and a “personal process journal.” In addition, students participated in “cross-class peer review,” in which students from Frost’s class provided feedback on the lay articles from Eble’s class and vice versa (136).

In their Critical Reflection, Combs et al. consider three components of the class that provided particular insights: the collaboration in course design; the inclusion of the peer mentor; and the cross-class peer review (137). Collaboration not only allowed the instructors to build on each other’s strengths and experiences but also helped them analyze other aspects of the class. Frost and Eble determined that differences in their own backgrounds and teaching styles affected student responses to assignments. For example, Eble’s experience on an Institutional Review Board influenced her ability to help students think beyond the perception that writing for varied audiences required them to “dumb down” their scientific findings (137).

Much discussion centers on what the researchers learned from the cross-class peer review about students’ dissonance in producing the CNN.com lay article. Students in the two classes addressed this challenge quite differently. Frost’s students resisted the complexity that Eble’s students insisted on sustaining in their revisions of their scientific article, while students in Eble’s class criticized the submissions from Frost’s students as “too simple.” The authors write that “even though students were presented with the exact same assignment prompt, they received different messages about their intended audiences” (138).

The researchers credit Combs’s presence as a peer mentor in Frost’s class for the students’ ability to revise more successfully for non-specialized audiences. They argue that he provided a more immediate outside audience at the same time that he promoted a sense of community and identification that encouraged students to make difficult rhetorical decisions (138-39). His feedback to the instructors helped them recognize the value of the cross-class peer review despite the apparent challenges it presented. In his commentary, he discusses how receiving the feedback from the other class prompted one student to achieve a “successful break from a single-form draft writing and in-class peer review” (Combs, qtd. in Combs et al. 140). He quotes the student’s perception that everyone in her own class “had the same understanding of what the paper was supposed to be” and her sense that the disruption of seeing the other class’s very different understanding fueled a complete revision that made her “happier with [her] actual article” (140). The authors conclude that both the contributions of the peer mentor and the dissonance created by the very different understandings of audience led to increased critical reflection (140), in particular, in Combs’s words, the recognition that

there are often spaces in writing not filled by right-and-wrong choices, but by creating drafts, receiving feedback, and ultimately making the decision to go in a chosen direction. (140)

In future iterations, in addition to retaining the cross-class peer review and the peer-mentor presence, the instructors propose equalizing the amount of feedback the classes receive, especially since receiving more feedback rather than less pushes students to “prioritize” and hence develop important revision strategies (141). They also plan to simplify the scientific-article assignment, which Frost deemed “too much” (141). An additional course-design revision involves creating a lay article from a previously published scientific paper in order to prepare students for the “affective impact” (141) of making radical changes in work to which they are already deeply committed. A final change involves converting the personal journal to a social-media conversation to develop awareness of the exigencies of public discussion of science (141).


Cox, Black, Heney, and Keith. Responding to Students Online. TETYC, May 2015. Posted 07/22/15.

Cox, Stephanie, Jennifer Black, Jill Heney, and Melissa Keith. “Promoting Teacher Presence: Strategies for Effective and Efficient Feedback to Student Writing Online.” Teaching English in the Two-Year College 42.4 (2015): 376-91. Web. 14 July 2015.

Stephanie Cox, Jennifer Black, Jill Heney, and Melissa Keith address the challenges of responding to student writing online. They note the special circumstances attendant on online teaching, in which students lack the cues provided by body language and verbal tone when they interpret instructor comments (376). Students in online sections, the authors write, do not have easy access to clarification and individual direction and may not always take the initiative in following up when their needs aren't met (377). These features of the online learning environment require teachers to develop communicative skills designed specifically for teaching in that environment.

To overcome the difficulty teachers may find in building a community among students with whom they do not interact face-to-face, the authors draw on the Community of Inquiry framework developed by D. Randy Garrison. This model emphasizes presence as a crucial rhetorical dimension in community building, distinguishing between “social presence,” “cognitive presence,” and “teacher presence” as components of a classroom in which teachers can create effective learning environments.

Social presence indicates the actions and rhetorical choices that give students a sense of "a real person online," in the words of online specialists Rena M. Palloff and Keith Pratt (qtd. in Cox et al. 377). Moves that allow the teacher to interact socially through the response process decrease the potential for students to "experience isolation and a sense of disconnection" (377). Cognitive presence involves activities that contribute to the "creation of meaning" in the classroom as students explore concepts and ideas, both individually and as part of the community. Through teacher presence, instructors direct learning and disseminate knowledge, setting the stage for social and cognitive interaction (377).

In the authors’ view, developing effective social, cognitive, and teacher presence requires attention to the purpose of particular responses depending on the stage of the writing process, to the concrete elements of delivery, and to the effects of different choices on the instructor’s workload.

Citing Peter Elbow’s discussion of “ranking and evaluation,” the authors distinguish between feedback that assigns a number on a scale and feedback that encourages ongoing development of an idea or draft (376-79; emphasis original). Ranking during early stages may allow teachers to note completion of tasks; evaluation, conversely, involves “communication” that allows students to move forward fruitfully on a project (379).

The authors argue that instructors in digital environments should follow James E. Porter's call for "resurrecting the neglected rhetorical canon of delivery" (379). Digital teaching tools such as emoticons offer ways to emulate the role of the body, which was important to classical theories of delivery; such tools can convey emotions that might otherwise be lost in online exchanges.

Finally, the authors note the tendency for responding online to grow into an overwhelming workload. “Limit[ing] their comments” is a “healthy” practice that teachers need not regret. Determining what kind of feedback is most appropriate to a given type of writing is important in setting these limits, as is making sure that students understand that different tasks will elicit different kinds of response (379-80).

The authors explore ways to address informal writing without becoming overwhelmed. They point out that teachers often don't respond in writing to informal work in face-to-face classrooms and thus do not necessarily need to do so in online classes. They suggest that "generalized group comments" can effectively point out shared trends in students' work, present examples, and enhance teacher presence. Such comments may be written but can also take the form of "audio" or "narrated screen capture," formats that generate social and teacher presence while advancing cognitive goals.

They recommend making individual comments on informal work publicly, posting only “one formative point per student while encouraging students to read all of the class postings and the instructor responses” (382). Students thus benefit from a broader range of instruction. Individual response is important early and in the middle of the course to create and reinforce students’ connections with the instructor; it is also important during the early development of paper ideas when some students may need “redirect[ion]” (382).

The authors also encourage "feedback-free spaces," especially for tentative early drafting. Making such spaces visible to the whole class gives students a sense of audience while allowing them to share ideas and see, through examples of early writing "in all its imperfection," how the writing process often unfolds (383).

Cox et al. suggest that feedback on formal assignments should embrace Richard Straub's "six conversational response strategies" (383), which focus on informal language, specific connections to the student's work, and maintaining an emphasis on "help or guidance" (384). The authors discuss five response methods for formal tasks. In their view, rubrics work best when free of complicated technical language and when integrated into a larger conversation about the student's writing (385-86). Cox et al. recommend using the available software programs for in-text comments, which students find more legible and which allow instructors to duplicate responses when appropriate (387). The authors particularly endorse "audio in-text comments," which not only save time but also allow students to hear the voice of an embodied person, enhancing presence (387). Similarly, they recommend generating holistic end-comments via audio, with a highlighting system to tie the comments back to specific moments in the student's text (387-88). Synchronous conferences, facilitated by many software options including screen-capture tools, can replace face-to-face conferences, which may not work for online students. The opportunity to talk not only about writing but also about other aspects of the student's environment further builds social, cognitive, and teacher presence (388).

The authors offer tables delineating the benefits and limitations of responses both to informal and formal writing, indicating the kind of presence supported by each and options for effective delivery (384, 389).