College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Litterio, Lisa M. Contract Grading: A Case Study. J of Writing Assessment, 2016. Posted 04/20/2017.

Litterio, Lisa M. “Contract Grading in a Technical Writing Classroom: A Case Study.” Journal of Writing Assessment 9.2 (2016). Web. 05 Apr. 2017.

In an online issue of the Journal of Writing Assessment, Lisa M. Litterio, who characterizes herself as “a new instructor of technical writing,” discusses her experience implementing a contract grading system in a technical writing class at a state university in the northeast. Her “exploratory study” was intended to examine student attitudes toward the contract-grading process, with a particular focus on how the method affected their understanding of “quality” in technical documents.

Litterio’s research into contract grading suggests that it can have the effect of supporting a process approach to writing as students consider the elements that contribute to an “excellent” response to an assignment. Moreover, Litterio contends, because it creates a more democratic classroom environment and empowers students to take charge of their writing, contract grading also supports critical pedagogy in the Freirean model. Litterio draws on research to support the additional claim that contract grading “mimic[s] professional practices” in that “negotiating and renegotiating a document” as students do in contracting for grades is a practice that “extends beyond the classroom into a workplace environment.”

Much of the research she reports dates to the 1970s and 1980s, often reflecting work in speech communication, but she cites as well models from Ira Shor, Jane Danielewicz and Peter Elbow, and Asao Inoue from the 2000s. In a common model, students can negotiate the quantity of work that must be done to earn a particular grade, but the instructor retains the right to assess quality and to assign the final grade. Litterio depicts her own implementation as a departure from some of these models in that she did make the final assessment, but applied criteria devised collaboratively by the students; moreover, her study differs from earlier reports of contract grading in that it focuses on the students’ attitudes toward the process.

Her Fall 2014 course, which she characterizes as a service course, enrolled twenty juniors and seniors representing seven majors. Neither Litterio nor any of the students were familiar with contract grading, and no students withdrew after learning of Litterio’s grading intentions from the syllabus and class announcements. At mid-semester and again at the end of the course, Litterio administered an anonymous open-ended survey to document student responses. Adopting the role of “teacher-researcher,” Litterio hoped to learn whether involvement in the generation of criteria led students to a deeper awareness of the rhetorical nature of their projects, as well as to “more involvement in the grading process and more of an understanding of principles discussed in technical writing, such as usability and document design.”

Litterio shares the contract options, which allowed students to agree to produce a stated number of assignments of either “excellent,” “great,” or “good” quality, an “entirely positive grading schema” that draws on Frances Zak’s claim that positive evaluations improved student “authority over their writing.”

The criteria for each assignment were developed in class discussion through an open voting process that resulted in general, if not absolute, agreement. Litterio provides the class-generated criteria for a résumé, which included length, format, and an expectation of “specific and strong verbs.” As the instructor, Litterio ultimately decided whether these criteria were met.

Mid-semester surveys indicated that students were evenly split in their preferences for traditional grading models versus the contract-grading model being applied. At the end of the semester, 15 of the 20 students expressed a preference for traditional grading.

Litterio coded the survey responses and discovered specific areas of resistance. First, some students cited the unfamiliarity of the contract model, which made it harder for them to “track [their] own grades,” in one student’s words. Second, the students noted that the instructor’s role in applying the criteria did not differ appreciably from instructors’ traditional role as it retained the “bias and subjectivity” the students associated with a single person’s definition of terms like “strong language.” Students wrote that “[i]t doesn’t really make a difference in the end grade anyway, so it doesn’t push people to work harder,” and “it appears more like traditional grading where [the teacher] decide[s], not us.”

In addition, students resisted seeing themselves and their peers as qualified to generate valid criteria and to offer feedback on developing drafts. Students wrote of the desire for “more input from you vs. the class,” their sense that student-generated criteria were merely “cosmetics,” and their discomfort with “autonomy.” Litterio attributes this reluctance to assume the role of experts to students’ actual novice status as well as to the nature of the course, which required students to write for different discourse communities because of their differing majors. She suggests that contract grading may be more appropriate for writing courses within majors, in which students may be more familiar with the specific nature of writing in a particular discipline.

However, students did confirm that the process of generating criteria made them more aware of the elements involved in producing exemplary documents in the different genres. Incorporating student input into the assessment process, Litterio believes, allows instructors to be more reflective about the nature of assessment in general, including the risk of creating a “yes or no . . . dichotomy that did not allow for the discussions and subjectivity” involved in applying a criterion. Engaging students throughout the assessment process, she contends, provides them with more agency and more opportunity to understand how assessment works. Student comments reflect an appreciation of having a “voice.”

This study, Litterio contends, challenges the assumption that contract grading is necessarily “more egalitarian, positive, [and] student-centered.” The process can still strike students as biased and based entirely on the instructor’s perspective, she found. She argues that the reflection on the relationship between student and teacher roles enabled by contract grading can lead students to a deeper understanding of “collective norms and contexts of their actions as they enter into the professional world.”


Wooten et al. SETs in Writing Classes. WPA, Fall 2016. Posted 02/11/2017.

Wooten, Courtney Adams, Brian Ray, and Jacob Babb. “WPAs Reading SETs: Toward an Ethical and Effective Use of Teaching Evaluations.” Journal of the Council of Writing Program Administrators 40.1 (2016): 50-66. Print.

Courtney Adams Wooten, Brian Ray, and Jacob Babb report on a survey examining the use of Student Evaluations of Teaching (SETs) by writing program administrators (WPAs).

According to Wooten et al., although WPAs appear to be dissatisfied with the way SETs are generally used and have often attempted to modify the form and implementation of these tools for evaluating teaching, they have done so without the benefit of a robust professional conversation on the issue (50). Noting that much of the research they found on the topic came from areas outside of writing studies (63), the authors cite a single collection on using SETs in writing programs by Amy Dayton that recommends using SETs formatively and as one of several measures to assess teaching. Beyond this source, they cite “the absence of research on SETs in our discipline” as grounds for the more extensive study they conducted (51).

The authors generated a list of WPA contacts at more than 270 institutions, ranging from two-year colleges to private and parochial schools to flagship public universities, and solicited participation via listservs and emails to WPAs (51). Sixty-two institutions responded in summer 2014, a response rate of 23%; 90% of the respondents were four-year institutions.

Despite this low response rate, the authors found the data informative (52). They note that the difficulty in recruiting faculty responses from two-year colleges may have resulted from problems in identifying responsible WPAs in programs where no specific individual directed a designated writing program (52).

Their survey, which they provide, asked demographic and logistical questions to establish current practice regarding SETs at the responding institutions as well as questions intended to elicit WPAs’ attitudes toward the ways SETs affected their programs (52). Open-ended questions allowed elaboration on Likert-scale queries (52).

An important recurring theme in the responses involved the kinds of authority WPAs could assert over the type and use of SETs at their schools. Responses indicated that the degree to which WPAs could access student responses and could use them to make hiring decisions varied greatly. Although 76% of the WPAs could read SETs, a similar number indicated that department chairs and other administrators also examined the student responses (53). In one case, for example, the director of a first-year-experience program took primary charge of the evaluations (53). The authors note that WPAs are held accountable for student outcomes but, in many cases, cannot make personnel decisions affecting these outcomes (54).

Wooten et al. report other tensions revolving around WPAs’ authority over tenured and tenure-track faculty; in these cases, surveyed WPAs often noted that they could influence neither curricula nor course assignments for such faculty (54). Many WPAs saw their role as “mentoring” rather than “hiring/firing.” The WPAs were obliged to respond to requests from external authorities to deal with poor SETs (54); the authors note a “tacit assumption . . . that the WPA is not capable of interpreting SET data, only carrying out the will of the university” (54). They argue that “struggles over departmental governance and authority” deprive WPAs of the “decision-making power” necessary to do the work required of them (55).

The survey “revealed widespread dissatisfaction” about the ways in which SETs were administered and used (56). Only 13% reported implementing a form specific to writing; more commonly, writing programs used “generic” forms that asked broad questions about the teacher’s apparent preparation, use of materials, and expertise (56). The authors contend that these “indirect” measures do not ask about practices specific to writing and may elicit negative comments from students who do not understand what kinds of activities writing professionals consider most beneficial (56).

Other issues of concern include the use of online evaluations, which provide data that can be easily analyzed but result in lower participation rates (57). Moreover, the authors note, WPAs often distrust numerical data without the context provided by narrative responses, to which they may or may not have access (58).

Respondents also noted confusion or uncertainty about how an institution determines what constitutes a “good” or “poor” score. Many of these judgments rest on comparing an individual teacher’s score to a departmental or university-wide average, with scores below the average signaling the need for intervention. The authors found evidence that even WPAs may fail to recognize that lower scores can be influenced not just by the grade the student expects but also by gender, ethnicity, and age, as well as whether the course is required (58-59).
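The average-based flagging that respondents describe can be illustrated with a short sketch. The Python snippet below is purely hypothetical (invented teachers and scores, not data from the survey); it simply marks any teacher whose mean SET score falls below the departmental mean, the blunt comparison the authors question because it ignores the factors noted above:

    # Hypothetical illustration of average-based flagging: a teacher whose mean
    # SET score falls below the departmental mean is marked for follow-up, with
    # no adjustment for expected grade, required-course status, or demographics.
    sections = {
        # teacher name -> per-section mean SET scores on a 1-5 scale (invented)
        "Teacher A": [4.6, 4.4, 4.5],
        "Teacher B": [3.9, 4.1, 4.0],
        "Teacher C": [3.2, 3.5, 3.4],
    }

    all_scores = [s for scores in sections.values() for s in scores]
    dept_mean = sum(all_scores) / len(all_scores)

    for teacher, scores in sections.items():
        teacher_mean = sum(scores) / len(scores)
        flag = "below departmental average" if teacher_mean < dept_mean else "at or above average"
        print(f"{teacher}: {teacher_mean:.2f} ({flag}; departmental mean {dept_mean:.2f})")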

Wooten et al. distinguish between “teaching effectiveness,” a basic measure of competence, and “teaching excellence,” practices and outcomes that can serve as benchmarks for other educators (60). They note that at many institutions, SETs appear to have little influence over recognition of excellence, for example through awards or commendations; classroom observations and teaching portfolios appear to be used more often for these determinations. SETs, in contrast, appear to have a more “punitive” function (61), used more often to single out teachers who purportedly fall short in effectiveness (60).

The authors note the vulnerability of contingent and non-tenure-track faculty to poorly implemented SETs and argue that a climate of fear occasioned by such practices can lead to “lenient grading and lowered demands” (61). They urge WPAs to consider the ethical implications of the use of SETs in their institutions.

Recommendations include “ensuring high response rates” through procedures and incentives; clarifying and standardizing designations of good and poor performance and ensuring transparency in the procedures for addressing low scores; and developing forms specific to local conditions and programs (61-62). Several of the recommendations concern increasing WPA authority over hiring and mentoring teachers, including tenure-track and tenured faculty. Wooten et al. recommend that all teachers assigned to writing courses administer writing-specific evaluations and be required to act on the information these forms provide; the annual-report process can allow tenured faculty to demonstrate their responsiveness (62).

The authors hope that these recommendations will lead to a “disciplinary discussion” among WPAs that will guide “the creation of locally appropriate evaluation forms that balance the needs of all stakeholders—students, teachers, and administrators” (63).


Gallagher, Chris W. Behaviorism as Social-Process Pedagogy. Dec. CCC. Posted 01/12/2017.

Gallagher, Chris W. “What Writers Do: Behaviors, Behaviorism, and Writing Studies.” College Composition and Communication 68.2 (2016): 238-65. Web. 12 Dec. 2016.

Chris W. Gallagher provides a history of composition’s relationship with behaviorism, arguing that this relationship is more complex than commonly supposed and that writing scholars can use the connections to respond to current pressures imposed by reformist models.

Gallagher notes the efforts of many writing program administrators (WPAs) to articulate professionally informed writing outcomes to audiences in other university venues, such as general-education committees (238-39). He reports that such discussions often move quickly from compositionists’ focus on what helps students “writ[e] well” to an abstract and universal ideal of “good writing” (239).

This shift, in Gallagher’s view, encourages writing professionals to get caught up in “the work texts do” in contrast to the more important focus on “the work writers do” (239; emphasis original). He maintains that “the work writers do” is in fact an issue of behaviors writers exhibit and practice, and that the resistance to “behaviorism” that characterizes the field encourages scholars to lose sight of the fact that the field is “in the behavior business; we are, and should be, centrally concerned with what writers do” (240; emphasis original).

He suggests that “John Watson’s behavioral ‘manifesto’—his 1913 paper, ‘Psychology as the Behaviorist Views It’” (241) captures what he sees as the “general consensus” of the time and a defining motivation for behaviorism: a shift away from “fuzzy-headed . . . introspective analysis” to the more productive process of “study[ing] observable behaviors” (241). Gallagher describes many different types of behaviorism, ranging from those designed to actually control behavior to those hoping to understand “inner states” through their observable manifestations (242).

One such productive model of behaviorism, in Gallagher’s view, is that of B. F. Skinner in the 1960s and 1970s. Gallagher argues that Skinner emphasized not “reflex behaviors” like those associated with Pavlov but rather “operant behaviors,” which Gallagher, citing psychologist John Staddon, characterizes as concerned with “the ways in which human (and other animal) behavior operates in its environment and is guided by its consequences” (242).

Gallagher contends that composition’s resistance to work like Skinner’s was influenced by views like that of James A. Berlin, for whom behaviorism was aligned with “current-traditional rhetoric” because it was deemed an “objective rhetoric” that assumed that writing was merely the process of conveying an external reality (243). The “epistemic” focus and “social turn” that emerged in the 1980s, Gallagher writes, generated resistance to “individualism and empiricism” in general, leading to numerous critiques of what were seen as behaviorist impulses.

Gallagher attributes much tension over behaviorism in composition to the influx of government funding in the 1960s designed to “promote social efficiency through strategic planning and accountability” (248). At the same time that this funding rewarded technocratic expertise, composition focused on “burgeoning liberation movements”; in Gallagher’s view, behaviorism erred by falling on the “wrong” or “science side” of this divide (244). Gallagher chronicles efforts by the National Council of Teachers of English and various scholars to arrive at a “détente” that could embrace forms of accountability fueled by behaviorism, such as “behavioral objectives” (248), while allowing the field to “hold on to its humanist core” (249).

In Gallagher’s view, scholars who struggled to address behaviorism, such as Lynn Z. and Martin Bloom, moved beyond mechanistic models of learning to advocate many features of effective teaching recognized today, such as a resistance to error-oriented pedagogy; attention to process, purposes, and audiences; and provision of “regular, timely feedback” (245-46). Negative depictions of behaviorism, Gallagher argues, in fact neglect the degree to which, in such scholarship, behaviorism becomes “a social-process pedagogy” (244; emphasis original).

In particular, Gallagher argues that “the most controversial behaviorist figure in composition history,” Robert Zoellner (246), has been underappreciated. According to Gallagher, Zoellner’s “talk-write” pedagogy was a corrective for “think-write” models that assumed that writing merely conveyed thought, ignoring the possibility that writing and thinking could inform each other (246). Zoellner rejected reflex-driven behaviorism that predetermined stimulus-response patterns, opting instead for an operant model in which objectives followed from rather than controlled students’ behaviors, which should be “freely emitted” (Zoellner, qtd. in Gallagher 250) and should emerge from “transactional” relationships among teachers and students in a “collaborative,” lab-like setting in which teachers interacted with students and modeled writing processes (247).

The goal, according to Gallagher, was consistently to “help students develop robust repertoires of writing behaviors to help them adapt to the different writing situations in which they would find themselves” (247). Gallagher contends that Zoellner advocated teaching environments in which

[behavioral objectives] are not codified before the pedagogical interaction; . . . are rooted in the transactional relationship between teachers and students; . . . are not required to be quantifiably measurable; and . . . operate in a humanist idiom. (251)

Although rejected in what Martin Nystrand denoted “the social 1980s” (qtd. in Gallagher 251), as funding for accountability initiatives withered (249), behaviorism did attract the attention of Mike Rose. His chapter in When a Writer Can’t Write and that of psychology professor Robert Boice attended to the ways in which writers relied on specific behaviors to overcome writer’s block; in Gallagher’s view, Rose’s understanding of the shortcomings of overzealous behaviorism did not prevent him from taking “writers’ behaviors qua behaviors extremely seriously” (253).

The 1990s, Gallagher reports, witnessed a moderate revival of interest in Zoellner, who became one of the “unheard voices” featured in new histories of the field (254). Writers of these histories, however, struggled to dodge behaviorism itself, hoping to develop an empiricism that would not insist on “universal laws and objective truth claims” (255). After these efforts, Gallagher notes, the term faded from view, re-emerging only recently in Maja Joiwind Wilson’s 2013 dissertation as a “repressive” methodology exercised as a form of power (255).

In contrast to these views, Gallagher argues that “behavior should become a key term in our field” (257). Current pressures to articulate ways of understanding learning that will resonate with reformers and those who want to impose rigid measurements, he contends, require a vocabulary that foregrounds what writers actually do and frames the role of teachers as “help[ing] students expand their behavioral repertoires” (258; emphasis original). This vocabulary should emphasize the social aspects of all behaviors, thereby foregrounding the fluid, dynamic nature of learning.

In his view, such a vocabulary would move scholars beyond insisting that writing and learning “operate on a higher plane than that of mere behaviors”; instead, it would generate “better ways of thinking and talking about writing and learning behaviors” (257; emphasis original). He recommends, for example, creating “learning goals” instead of “outcomes” because such a shift discourages efforts to reduce complex activities to pre-determined, reflex-driven steps toward a static result (256). Scholars accustomed to a vocabulary of “processes, practices, and activities” can benefit from learning as well to discuss “specific, embodied, scribal behaviors” and the environments necessary if the benefits accruing to these behaviors are to be realized (258).

 


Patchan and Schunn. Effects of Author and Reviewer Ability in Peer Feedback. JoWR 2016. Posted 11/25/2016.

Patchan, Melissa M., and Christian D. Schunn. “Understanding the Effects of Receiving Peer Feedback for Text Revision: Relations between Author and Reviewer Ability.” Journal of Writing Research 8.2 (2016): 227-65. Web. 18 Nov. 2016. doi: 10.17239/jowr-2016.08.02.03

Melissa M. Patchan and Christian D. Schunn describe a study of the relationship between the abilities of writers and peer reviewers in peer assessment. The study asks how the relative ability of writers and reviewers influences the effectiveness of peer review as a learning process.

The authors note that in many content courses, the time required to provide meaningful feedback encourages many instructors to turn to peer assessment (228). They cite studies suggesting that in such cases, peer response can be more effective than teacher response because, for example, students may actually receive more feedback, the feedback may be couched in more accessible terms, and students may benefit from seeing models and new strategies (228-29). Still, studies find, teachers and students both question the efficacy of peer assessment, with students stating that the quality of review depends largely on the abilities of the reviewer (229).

Patchan and Schunn distinguish between the kind of peer review characteristic of writing classrooms, which they describe as “pair or group-based face-to-face conversations” emphasizing “qualitative feedback,” and the type more often practiced in large content classes, which they see as more like “professional journal reviewing” that is “asynchronous, and written-based” (228). Their study addresses the latter format and is part of a larger study examining peer feedback in a widely required psychology class at a “large, public research university in the southeast” (234).

A random selection of 189 students wrote initial drafts in response to an assignment assessing media handling of a psychological study using criteria from the course textbook (236, 238). Students then received four drafts to review and were given a week to revise their own drafts in response to feedback. Participants used the “web-based peer assessment functions of turnitin.com” (237).

The researchers gauged participants’ writing ability using SAT scores and grades in their two first-year writing courses (236). Graduate rhetoric students also rated the first drafts. The protocol then included a “median split” to designate writers in binary fashion as either high- or low-ability; “high” authors were also categorized as “high” reviewers. Patchan and Schunn note that there was a wide range in writer abilities but argue that, even though the “design decreases the power of this study,” such determinations were needed because of the large sample size, which in turn made the detection of “important patterns” likely (236-37). They feel that “a lower powered study was a reasonable tradeoff for higher external validity (i.e., how reviewer ability would typically be detected)” (237).
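For readers unfamiliar with the technique, a median split simply divides a continuous measure at its midpoint and labels the two halves. The Python sketch below is a hypothetical illustration (the composite scores are invented, not the study’s data) of how such a binary high/low designation might be derived:

    # Minimal, hypothetical sketch of a median split: an invented composite
    # ability score (e.g., built from SAT writing scores and first-year writing
    # grades) is split at its median; writers at or above the cutoff are "high."
    from statistics import median

    ability = {"s01": 71.0, "s02": 54.5, "s03": 88.0, "s04": 62.5, "s05": 77.0}

    cutoff = median(ability.values())
    groups = {
        student: ("high" if score >= cutoff else "low")
        for student, score in ability.items()
    }

    print(f"median cutoff = {cutoff}")
    print(groups)  # {'s01': 'high', 's02': 'low', 's03': 'high', 's04': 'low', 's05': 'high'}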

The authors describe their coding process in detail. In addition to coding initial drafts for quality, coders examined each reviewer’s feedback for its attention to higher-order problems and lower-order corrections (239-40). Coders also tabulated which comments resulted in revision as well as the “quality of the revision” (241). This coding was intended to “determine how the amount and type of comments varied as a function of author ability and reviewer ability” (239). A goal of the study was to determine what kinds of feedback triggered the most effective responses in “low” authors (240).

The study was based on a cognitive model of writing derived from the updated work of Linda Flower and John R. Hayes, in which three aspects of writing/revision follow a writer’s review of a text: problem detection, problem diagnosis, and strategy selection for solving the diagnosed problems (230-31). In general, “high” authors were expected to produce drafts with fewer initial problems and to have stronger reading skills that allowed them to detect and diagnose more problems in others’ drafts, especially “high-level” problems having to do with global issues as opposed to issues of surface correctness (230). High ability authors/reviewers were also assumed to have a wider repertoire of solution strategies to suggest for peers and to apply to their own revisions (233). All participants received a rubric intended to guide their feedback toward higher-order issues (239).

Some of the researchers’ expectations were confirmed, but others were only partially supported or not supported (251). Writers whose test scores and grades categorized them as “high” authors did produce better initial drafts, but only by a slight margin. The researchers posit that factors other than ability may affect draft quality, such as interest or time constraints (243). “High” and “low” authors received the same number of comments despite differences in the quality of the drafts (245), but “high” authors made more higher-order comments even though they didn’t provide more solutions (246). “High” reviewers indicated more higher-order issues to “low” authors than to “high,” while “low” reviewers suggested the same number of higher-order changes to both “high” and “low” authors (246).

Patchan and Schunn considered the “implementation rate,” or the proportion of comments on which students chose to act, and “revision quality” (246). They analyzed only comments that were specific enough to indicate action. In contrast to findings in previous studies, the expectation that better writers would make more and better revisions was not supported. Overall, writers acted on only 32% of the comments received, and only a quarter of the comments resulted in improved drafts (248). Author ability did not factor into these results. Moreover, the ability of the reviewer had no effect on how many revisions were made or how effective they were (248).
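The two figures reported here are straightforward proportions of the coded, actionable comments. A short, hypothetical Python sketch (the counts are invented, not the study’s) shows the arithmetic:

    # Hypothetical counts illustrating the two proportions discussed above.
    actionable_comments = 400   # comments specific enough to indicate action
    implemented = 128           # comments the author acted on
    improved_draft = 100        # comments whose revisions improved the draft

    implementation_rate = implemented / actionable_comments
    improvement_rate = improved_draft / actionable_comments

    print(f"implementation rate: {implementation_rate:.0%}")            # 32%
    print(f"comments leading to improvement: {improvement_rate:.0%}")   # 25%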

It was expected that low-ability authors would implement more suggestions from higher-ability reviewers, but in fact, “low authors implemented more high-level criticism comments . . . from low reviewers than from high reviewers” (249). The quality of the revisions also improved for low-ability writers when the comments came from low-ability reviewers. The researchers conclude that “low authors benefit the most from feedback provided by low reviewers” (249).

Students acted on 41% of the low-level criticisms, but these changes seldom resulted in better papers (249).

The authors posit that rates of commenting and implementation may both be affected by limits or “thresholds” on how much feedback a given reviewer is willing to provide and how many comments a writer is able or willing to act on (252, 253). They suggest that low-ability reviewers may explain problems in language that is more accessible to writers with less ability. Patchan and Schunn propose that feedback may be most effective when it occurs within the student’s zone of proximal development, so that weaker writers may be helped most by peers just beyond them in ability rather than by peers with much more sophisticated skills (253).

In the authors’ view, that “neither author ability nor reviewer ability per se directly affected the amount and quality of revisions” (253) suggests that the focus in designing effective peer review processes should shift from how to group students to improving students’ ability to respond to comments (254). They recommend further research using more “direct” measures of writing and reviewing ability (254). A major conclusion from this study is that “[h]igher-ability students will likely revise their texts successfully regardless of who [they are] partnered with, but the lower-ability students may need feedback at their own level” (255).



West-Puckett, Stephanie. Digital Badging as Participatory Assessment. CE, Nov. 2016. Posted 11/17/2016.

Stephanie West-Puckett presents a case study of the use of “digital badges” to create a local, contextualized, and participatory assessment process that works toward social justice in the writing classroom.

She notes that digital badges are graphic versions of those earned by scouts or worn by members of military groups to signal “achievement, experience, or affiliation in particular communities” (130). Her project, begun in Fall 2014, grew out of Mozilla’s free Open Badging Initiative and the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC) that funded grants to four universities as well as to museums, libraries, and community partnerships to develop badging as a way of recognizing learning (131).

West-Puckett employed badges as a way of encouraging and assessing student engagement in the outcomes and habits of mind included in such documents as the Framework for Success in Postsecondary Writing, the Outcomes Statements for First-Year Composition produced by the Council of Writing Program Administrators, and her own institution’s outcomes statement (137). Her primary goal is to foster a “participatory” process that foregrounds the agency of teachers and students and recognizes the ways in which assessment can influence classroom practice. She argues that such participation in designing and interpreting assessments can address the degree to which assessment can drive bias and limit access and agency for specific groups of learners (129).

She reviews composition scholarship characterizing most assessments as “top-down” (127-28). In these practices, West-Puckett argues, instruments such as rubrics become “fetishized,” with the result that they are forced upon contexts to which they are not relevant, thus constraining the kinds of assignments and outcomes teachers can promote (134). Moreover, assessments often fail to encourage students to explore a range of literacies and do not acknowledge learners’ achievements within those literacies (130). More valid, for West-Puckett, are “hyperlocal” assessments designed to help teachers understand how students are responding to specific learning opportunities (134). Allowing students to join in designing and implementing assessments makes the learning goals visible and shared while limiting the power of assessment tools to marginalize particular literacies and populations (128).

West-Puckett contends that the multimodal focus in writing instruction exacerbates the need for new modes of assessment. She argues that digital badges partake of “the primacy of visual modes of communication,” especially for populations “whose bodies were not invited into the inner sanctum of a numerical and linguistic academy” (132). Her use of badges contributes to a form of assessment that is designed not to deride writing that does not meet the “ideal text” of an authority but rather to enlist students’ interests and values in “a dialogic engagement about what matters in writing” (133).

West-Puckett argues for pairing digital badging with “critical validity inquiry,” in which the impact of an assessment process is examined through a range of theoretical frames, such as feminism, Marxism, or queer or disability theory (134). This inquiry reveals assessment’s role in sustaining or potentially disrupting entrenched views of what constitutes acceptable writing by examining how such views confer power on particular practices (134-35).

In West-Puckett’s classroom in a “mid-size, rural university in the south” with a high percentage of students of color and first-generation college students (135), small groups of students chose outcomes from the various outcomes statements, developed “visual symbols” for the badges, created a description of the components and value of the outcomes for writing, and detailed the “evidence” that applicants could present from a range of literacy practices to earn the badges (137). West-Puckett hoped that this process would decrease the “disconnect” between her understanding of the outcomes and that of students (136), as well as engage students in a process that takes into account the “lived consequences of assessment” (141): its disparate impact on specific groups.

The case study examines several examples of badges, such as one using a compass to represent “rhetorical knowledge” (138). The group generated multimodal presentations, and applicants could present evidence in a range of forms, including work done outside of the classroom (138-39). The students in the group decided whether or not to award the badge.

West-Puckett details the degree to which the process invited “lively discussion” by examining the “Editing MVP” badge (139). Students defined editing as proofreading and correcting one’s own paper but visually depicted two people working together. The group refused the badge to a student of color because of grammatical errors but awarded it to another student who argued for the value of using non-standard dialogue to show people “‘speaking real’ to each other” (qtd. in West-Puckett 140). West-Puckett recounts the classroom discussion of whether editing could be a collaborative effort and when and in what contexts correctness matters (140).

In Fall 2015, West-Puckett implemented “Digital Badging 2.0” in response to her concerns about “the limited construct of good writing some students clung to” as well as how to develop “badging economies that asserted [her] own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers” (142). She created two kinds of badging activities, one carried out by students as before, the other for her own assessment purposes. Students had to earn all the student-generated badges in order to pass, and a given number of West-Puckett’s “Project Badges” to earn particular grades (143). She states that she privileges “engagement as opposed to competency or mastery” (143). She maintains that this dual process, in which her decision-making process is shared with the students who are simultaneously grappling with the concepts, invites dialogue while allowing her to consider a wide range of rhetorical contexts and literacy practices over time (144).

West-Puckett reports that although she found evidence that the badging component did provide students an opportunity to take more control of their learning, as a whole the classes did not “enjoy” badging (145). They expressed concern about the extra work, the lack of traditional grades, and the responsibility involved in meeting the project’s demands (145). However, in disaggregated responses, students of color and lower-income students viewed the badge component favorably (145). According to West-Puckett, other scholars have similarly found that students in these groups value “alternative assessment models” (146).

West-Puckett lays out seven principles that she believes should guide participatory assessment, foregrounding the importance of making the processes “open and accessible to learners” in ways that “allow learners to accept or refuse particular identities that are constructed through the assessment” (147). In addition, “[a]ssessment artifacts,” in this case badges, should be “portable” so that students can use them beyond the classroom to demonstrate learning (148). She presents badges as an assessment tool that can embody these principles.



Anderst et al. Accelerated Learning at a Community College. TETYC Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), part of the City University of New York system (CUNY) (11), in the spring and fall semesters of 2014 (14).

In the ALP model followed at QCC, students who had “placed into remediation” simultaneously took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” in the two-course first-year curriculum (11). Both courses were taught by the same instructor, who could develop specific curriculum that incorporated program elements designed to encourage the students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), for placement and to “exit remediation,” thus receiving a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey students took in the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases, their sample was too small for results to rise to statistical significance, although final numbers did allow such evaluation (18). They also note that the students in the ALP sections whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend encouraging more open-ended responses than they received to more accurately account for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because it indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)


Zuidema and Fredricksen. Preservice Teachers’ Use of Resources. August RTE. Posted 09/25/2016.

Zuidema, Leah A., and James E. Fredricksen. “Resources Preservice Teachers Use to Think about Student Writing.” Research in the Teaching of English 51.1 (2016): 12-36. Print.

Leah A. Zuidema and James E. Fredricksen document the resources used by students in teacher-preparation programs. The study examined transcripts collected from VoiceThread discussions among 34 preservice teachers (PSTs) (16). The PSTs reviewed and discussed papers provided by eighth- and ninth-grade students in Idaho and Indiana (18).

Zuidema and Fredricksen define “resource” as “an aid or source of evidence used to help support claims; an available supply that can be drawn upon when needed” (15). They intend their study to move beyond determining what writing teachers “get taught” to discovering what kinds of resources PSTs actually use in developing their theories and practices for K-12 writing classrooms (13-14).

The literature review suggests that the wide range of concepts and practices presented in teacher-preparation programs varies depending on local conditions and is often augmented by students’ own educational experiences (14). The authors find very little systematic study of how beginning teachers actually draw on the methods and concepts their training provides (13).

Zuidema and Fredricksen see their study as building on prior research by systematically identifying the resources teachers use and assigning them to broad categories to allow a more comprehensive understanding of how teachers use such sources to negotiate the complexities of teaching writing (15-16).

To gather data, the researchers developed a “community of practice” by building their methods courses around a collaborative project focusing on assessing writing across two different teacher-preparation programs (16-17). Twenty-six Boise State University PSTs and 8 from a small Christian college, Dordt, received monthly sets of papers from the eighth and ninth graders, which they then assessed individually and with others at their own institutions.

The PSTs then worked in groups through VoiceThread to respond to the papers in three “rounds,” first “categoriz[ing]” the papers according to strengths and weaknesses; then categorizing and prioritizing the criteria they relied on; and finally “suggest[ing] a pedagogical plan of action” (19). This protocol did not explicitly ask PSTs to name the resources they used but revealed these resources via the transcriptions (19).

The methods courses taught by Zuidema and Fredricksen included “conceptual tools” such as “guiding frameworks, principles, and heuristics,” as well as “practical tools” like “journal writing and writer’s workshop” (14). PSTs read professional sources and participated in activities that emphasized the value of sharing writing with students (17). Zuidema and Fredricksen contend that a community of practice in which professionals explain their reasoning as they assess student writing encourages PSTs to “think carefully about theory-practice connections” (18).

In coding the VoiceThread conversations, the researchers focused on “rhetorical approaches to composition” (19), characterized as attention to “arguments and claims . . . , evidence and warrants,” and “sources of support” (20). They found five categories of resources PSTs used to support claims about student writing:

  • Understanding of students and student writing (9% of instances)
  • Knowledge of the context (10%)
  • Colleagues (11%)
  • PSTs’ roles as writers, readers, and teachers (17%)
  • PSTs’ ideas and observations about writing (54%) (21)

In each case, Zuidema and Fredricksen developed subcategories. For example, “Understanding of students and student writing” included “Experience as a student writer” and “Imagining students and abilities,” while “Colleagues” consisted of “Small-group colleagues,” “More experienced teachers,” “Class discussion/activity,” and “Professional reading” (23).

Category 1, “Understanding of students and student writing,” was used “least often,” with PSTs referring to their own student-writing experiences only six times out of 435 recorded instances (24). The researchers suggest that this category might have been used more had the PSTs been able to interact with the students (24). They see “imagining” how students are reacting to assignments as important, a “way [teachers] can develop empathy” and develop interest in how students understand writing (24).

Category 2, “Knowledge of Context as a Resource,” was also seldom used. Those who did refer to it tended to note issues involving what Zuidema and Fredricksen call GAPS: rhetorical awareness of “genre, audience, purpose, and situation of the writing” (25). Other PSTs noted the role of the prompt in inviting strong writing. The researchers believe these types of awarenesses encourage more sophisticated assessment of student work (25).

The researchers express surprise that Category 3, “Colleagues,” was used so seldom (26). Colleagues in the small groups were cited most often, but despite specific encouragement to do so, several groups did not draw on this resource. Zuidema and Fredricksen note that reference to the resource increased through the three rounds. Also surprising was the low rate of reference to mentors and experienced teachers and to class discussions, activities, and assignments: Only one participant mentioned a required “professional reading” as a resource (27). While acknowledging that the PSTs may have used concepts from mentors and class assignments without explicitly naming them, the authors cite prior research suggesting that reference to outside sources can be perceived as undercutting the authority conferred by experience (27).

In Category 4, “Roles as Resources,” Zuidema and Fredricksen note that PSTs were much more likely to draw on their roles as readers or teachers than as writers (28). Arguing that a reader perspective augured an awareness of the importance of audience, the researchers note that most PSTs in their study perceived their own individual reader responses as most pertinent, suggesting the need to emphasize varied perspectives readers might bring to a text (28).

Fifty-four percent of the PSTs’ references invoked “Writing as a Resource” (29). Included in this category were “imagined ideal writing,” “comparisons across student writing,” “holistic” references to “whole texts,” and “excerpts” (29-31). In these cases, PSTs’ uses of the resources ranged from “a rigid, unrhetorical view of writing” in which “rules” governed assessment (29) to a more effective practice that “connected [student writing] with a rhetorical framework” (29). For example, excerpts could be used for “keeping score” on “checklists” or as a means of noting patterns and suggesting directions for teaching (31). Comparisons among students and expectations for other students at similar ages, Zuidema and Fredricksen suggest, allowed some PSTs to reflect on developmental issues, while holistic evaluation allowed consideration of tone, audience, and purpose (30).

Zuidema and Fredricksen conclude that in encouraging preservice teachers to draw on a wide range of resources, “exposure was not enough” (32), and “[m]ere use is not the goal” (33). Using their taxonomy as a teaching tool, they suggest, may help PSTs recognize the range of resources available to them and “scaffold their learning” (33) so that they will be able to make informed decisions when confronted with the multiple challenges inherent in today’s diverse and sometimes “impoverished” contexts for teaching writing (32).



Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of their peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric” using a software package, My Reviewers, that allowed for discursive comments but also, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose own work received high scores from their instructors assigned peer-review scores that correlated more closely with the instructors’ scores. These stronger students also demonstrated more variance in the scores they assigned than did weaker students (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).
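To make the observation concrete, here is a minimal sketch, using simulated data rather than the study’s, of how a trait-by-trait correlation matrix can expose this kind of holistic “halo” pattern; the 1-4 scale and the eight traits are assumptions for illustration only:

import numpy as np

rng = np.random.default_rng(0)

# Simulate a holistic "halo": each paper gets an overall quality level,
# and all eight trait scores cluster around that level (hypothetical data).
n_papers, n_traits = 200, 8
overall = rng.normal(loc=3.0, scale=0.8, size=(n_papers, 1))
noise = rng.normal(scale=0.4, size=(n_papers, n_traits))
scores = np.clip(np.round(overall + noise), 1, 4)

# Pairwise Pearson correlations among the eight trait scores.
corr = np.corrcoef(scores, rowvar=False)

# If raters score holistically, every off-diagonal entry will be strongly
# positive, i.e., the traits are not being assessed independently.
print(np.round(corr, 2))

When every off-diagonal correlation is high, the separate trait scores add little information beyond a single holistic score, which is essentially the inference the authors draw.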

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer review so that it benefits the widest range of students (73-74).


Comer and White. MOOC Assessment. CCC, Feb. 2016. Posted 04/18/2016.

Comer, Denise K., and Edward M. White. “Adventuring into MOOC Writing Assessment: Challenges, Results, and Possibilities.” College Composition and Communication 67.3 (2016): 318-59. Print.

Denise K. Comer and Edward M. White explore assessment in the “first-ever first-year-writing MOOC,” English Composition I: Achieving Expertise, developed under the auspices of the Bill & Melinda Gates Foundation, Duke University, and Coursera (320). Working with “a team of more than twenty people” with expertise in many areas of literacy and online education, Comer taught the course (321), which enrolled more than 82,000 students, 1,289 of whom received a Statement of Accomplishment indicating a grade of 70% or higher. Nearly 80% of the students “lived outside the United States” and for a majority, English was not the first language, although 59% of these said they were “proficient or fluent in written English” (320). Sixty-six percent had bachelor’s or master’s degrees.

White designed and conducted the assessment, which addressed concerns about MOOCs as educational options. The authors recognize MOOCs as “antithetical” (319) to many accepted principles in writing theory and pedagogy, such as the importance of interpersonal instructor/student interaction (319), the imperative to meet the needs of a “local context” (Brian Huot, qtd. in Comer and White 325) and a foundation in disciplinary principles (325). Yet the authors contend that as “MOOCs are persisting,” refusing to address their implications will undermine the ability of writing studies specialists to influence practices such as Automated Essay Scoring, which has already been attempted in four MOOCs (319). Designing a valid assessment, the authors state, will allow composition scholars to determine how MOOCs affect pedagogy and learning (320) and from those findings to understand more fully what MOOCs can accomplish across diverse populations and settings (321).

Comer and White stress that assessment processes extant in traditional composition contexts can contribute to a “hybrid form” applicable to the characteristics of a MOOC, such as the “scale” of the project and the “wide heterogeneity of learners” (324). Models for assessment in traditional environments as well as online contexts had to be combined with new approaches that addressed the “lack of direct teacher feedback and evaluation and limited accountability for peer feedback” (324).

For Comer and White, this hybrid approach must accommodate the degree to which the course combined the features of an “xMOOC” governed by a traditional academic course design with those of a “cMOOC,” in which learning occurs across “network[s]” through “connections” largely of the learners’ creation (322-23).

Learning objectives and assignments mirrored those familiar to compositionists, such as the ability to “[a]rgue and support a position” and “[i]dentify and use the stages of the writing process” (323). Students completed four major projects, the first three incorporating drafting, feedback, and revision (324). Instructional videos and optional workshops in Google Hangouts supported assignments like discussion forum participation, informal contributions, self-reflection, and peer feedback (323).

The assessment itself, designed to shed light on how best to assess such contexts, consisted of “peer feedback and evaluation,” “Self-reflection,” three surveys, and “Intensive Portfolio Rating” (325-26).

The course supported both formative and evaluative peer feedback through “highly structured rubrics” and extensive modeling (326). Students who had submitted drafts each received responses from three other students, and those who submitted final drafts received evaluations from four peers on a 1-6 scale (327). The authors argue that despite the level of support peer review requires, it is preferable to more expert-driven or automated responses because they believe that

what student writers need and desire above all else is a respectful reader who will attend to their writing with care and respond to it with understanding of its aims. (327)

They found that the formative review, although taken seriously by many students, was “uneven,” and students varied in their appreciation of the process (327-29). Meanwhile, the authors interpret the evaluative peer review as indicating that “student writing overall was successful” (330). Peer grades closely matched those of the expert graders, and, while marginally higher, were not inappropriately high (330).

The MOOC provided many opportunities for self-reflection, which the authors describe as “one of the richest growth areas” (332). They provide examples of student responses to these opportunities as evidence of committed engagement with the course; a strong desire for improvement; an appreciation of the value of both receiving and giving feedback; and awareness of opportunities for growth (332-35). More than 1,400 students turned in “final reflective essays” (335).

Self-efficacy measures revealed that students exhibited an unexpectedly high level of confidence in many areas, such as “their abilities to draft, revise, edit, read critically, and summarize” (337). Somewhat lower confidence levels in their ability to give and receive feedback persuaded the authors that a MOOC emphasizing peer interaction served as an “occasion to hone these skills” (337). The greatest gain occurred in this domain.

Nine “professional writing instructors” (339) assessed portfolios for 247 students who had both completed the course and opted into the IRB component (340). This assessment confirmed that while students might not be able to “rely consistently” on formative peer review, peer evaluation could effectively supplement expert grading (344).

Comer and White stress the importance of further research in a range of areas, including how best to support effective peer response; how ESL writers interact with MOOCs; what kinds of people choose MOOCs and why; and how MOOCs might function in WAC/WID situations (344-45).

The authors stress the importance of avoiding “extreme concluding statements” about the effectiveness of MOOCs based on findings such as theirs (346). Their study suggests that different learners valued the experience differently; those who found it useful did so for varied reasons. Repeating that writing studies must take responsibility for assessment in such contexts, they emphasize that “MOOCs cannot and should not replace face-to-face instruction” (346; emphasis original). However, they contend that even enrollees who interacted only briefly with the MOOC left with an exposure to writing practices they would not otherwise have gained, and that the students who satisfactorily completed the MOOC outnumbered those Comer would have reached in 53 years of teaching her regular FY sessions (346).

In designing assessments, the authors urge, compositionists should resist the impulse to focus solely on the “Big Data” produced by assessments at such scales (347-48). Such a focus can obscure the importance of individual learners who, they note, “bring their own priorities, objectives, and interests to the writing MOOC” (348). They advocate making assessment an activity for the learners as much as possible through self-reflection and through peer interaction, which, when effectively supported, “is almost as useful to students as expert response and is crucial to student learning” (349). Ultimately, while the MOOC did not succeed universally, it offered many students valuable writing experiences (346).


Bourelle et al. Multimodal in f2f vs. online classes. C&C, Mar. 2016. Posted 01/24/2016.

Bourelle, Andrew, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong. “Sites of Multimodal Literacy: Comparing Student Learning in Online and Face-to-Face Environments.” Computers and Composition 39 (2015): 55-70. Web. 14 Jan. 2016.

Andrew Bourelle, Tiffany Bourelle, Anna V. Knutson, and Stephanie Spong report on a “small pilot study” at the University of New Mexico that compares how “multimodal literacies” are taught in online and face-to-face (f2f) composition classes (55-56). Rather than arguing for the superiority of a particular environment, the writers contend, they hope to “understand the differences” and “generate a conversation regarding what instructors of a f2f classroom can learn from the online environment, especially when adopting a multimodal curriculum” (55). The authors find that while differences in overall learning measures were slight, with a small advantage to the online classes, online students demonstrated considerably more success in the multimodal component featured in both kinds of classes (60).

They examined student learning in two online sections and one f2f section teaching a “functionally parallel” multimodal curriculum (58). The online courses were part of eComp, an online initiative at the University of New Mexico based on the Writers’ Studio program at Arizona State University, which two of the current authors had helped to develop (57). Features derived from the Writers’ Studio included the assignment of three projects to be submitted in an electronic portfolio as well as a reflective component in which the students explicated their own learning. Additionally, the eComp classes “embedded” instructional assistants (IAs): graduate teaching assistants and undergraduate tutors (57-58). Students received formative peer review and feedback from both the instructor and the IAs (57-58).

Students created multimodal responses to the three assignments—a review, a commentary, and a proposal. The multimodal components “often supplemented, rather than replaced, the written portion of the assignment” (58). Students analyzed examples from other classes and from public media through online discussions, focusing on such issues as “the unique features of each medium” and “the design features that either enhanced or stymied” a project’s rhetorical intent (58). Bourelle et al. emphasize the importance of foregrounding “rhetorical concepts” rather than the mechanics of electronic presentation (57).

The f2f class, taught by one of the authors who was also teaching one of the eComp classes, used the same materials, but the online discussion and analysis were replaced by in-class instruction and interaction, and the students received instructor and peer feedback (58). Students could consult the IAs in the campus writing center and seek other feedback via the center’s online tutorials (58).

The authors present their assessment as both quantitative, through holistic scores using a rubric that they present in an Appendix, and qualitative, through consideration of the students’ reflection on their experiences (57). Because the eportfolios created by both kinds of classes included a number of different genres, the five assessment readers required specific norming on portfolio assessment (58-59). Four of the readers were instructors or tutors in the pilot, with the fifth assigned so that instructors would not be assessing their own students’ work (58). Third reads reconciled disparate scores. The readers examined all of the f2f portfolios and 21, or 50%, of the online submissions. Bourelle et al. provide statistical data to argue that this 50% sample adequately supports their conclusions at a “confidence level of 80%” (59).
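As a rough, hypothetical check (the article’s actual statistical reasoning is not reproduced here), one conventional way to gauge whether scoring 21 of an assumed 42 online portfolios is adequate at an 80% confidence level is a margin-of-error calculation with a finite-population correction:

import math

N = 42      # assumed total number of online portfolios (21 is stated to be 50%)
n = 21      # portfolios actually scored
z = 1.282   # z-score corresponding to an 80% confidence level
p = 0.5     # most conservative proportion estimate

standard_error = math.sqrt(p * (1 - p) / n)
finite_population_correction = math.sqrt((N - n) / (N - 1))
margin_of_error = z * standard_error * finite_population_correction

print(f"Approximate margin of error at 80% confidence: {margin_of_error:.1%}")

Under these assumptions the margin of error works out to roughly ten percentage points; whether that is adequate depends on the claims being supported, which is presumably the judgment Bourelle et al. make with their own figures.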

The rubric assessed features such as

organization of contents (a logical progression), the overall focus (thesis), development (the unique features of the medium and how well the modes worked together), format and design (overall design aesthetics . . . ), and mechanics. . . . (60)

Students’ learning about multimodal production was assessed through the reflective component (60). The substantial difference in this score led to a considerable difference in the total scores (61).

The authors provide specific examples of work done by an f2f student and by an online student to illustrate the distinctions they felt characterized the two groups. They argue that students in the f2f classes as a group had difficulties “mak[ing] choices in design according to the needs of the audience” (61). Similarly, in the reflective component, f2f students had more trouble explaining “their choice of medium and how the choice would best communicate their message to the chosen audience” (61).

In contrast, the researchers state that the student representing the online cohort exhibits “audience awareness with the choice of her medium and the content included within” (62). Such awareness, the authors write, carried through all three projects, growing in sophistication (62-63). Based on both her work and her reflection, this student seemed to recognize what each medium offered and to make reasoned choices for effect. The authors present one student from the f2f class who demonstrated similar learning, but argue that, on the whole, the f2f work and reflections revealed less efficacy with multimodal projects (63).

Bourelle et al. do not feel that self-selection for more comfort with technology affected the results because survey data indicated that “life circumstances” rather than attitudes toward technology governed students’ choice of online sections (64). They indicate, in contrast, that the presence of the IAs may have had a substantive effect (64).

They also discuss the “archival” nature of an online environment, in which prior discussion and drafts remained available for students to “revisit,” with the result that the reflections were more extensive. Such reflective depth, Claire Lauer suggests, leads to “more rhetorically effective multimodal projects” (cited in Bourelle et al. 65).

Finally, they posit an interaction between what Rich Halverson and R. Benjamin Shapiro designate “technologies for learners” and “technologies for education.” The latter refer to the tools used to structure classrooms, while the former include specific tools and activities “designed to support the needs, goals, and styles of individuals” (qtd. in Bourelle et al. 65). The authors suggest that when the tools individual students use are in fact the same as the “technologies for education,” this immersive environment leads students to engage more fully with multimodality.

This interaction, the authors suggest, is especially important given the caveat, found in research and in the 2013 CCCC position statement on online writing instruction, that online courses should prioritize writing and rhetorical concepts, not the technology itself (65). The authors note that online students appeared to spontaneously select more advanced technology than the f2f students, choices that Daniel Anderson argues inherently lead to “enhanced critical thinking” and higher motivation (66).

The authors argue that their research supports two recommendations: first, the inclusion of IAs for multimodal learning; and second, the adoption by f2f instructors of multimodal activities and presentations, such as online discussion, videoed instruction, tutorials, and multiple examples. Face-to-face instructors, in this view, should try to emulate more nearly the “archival and nonlinear nature of the online course” (66). The authors call for further exploration of their contention that “student learning is indeed different within online and f2f multimodal courses,” based on their findings at the University of New Mexico (67).