College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Sills, Ellery. Creating “Outcomes 3.0.” CCC, Sept. 2018. Posted 10/24/2018.

Sills, Ellery. “Making Composing Policy Audible: A Genealogy of the WPA Outcomes Statement 3.0.” College Composition and Communication 70.1 (2018): 57-81. Print.

Ellery Sills provides a “genealogy” of the deliberations involved in the development of “Outcomes 3.0,” the third revision of the Council of Writing Program Administrators’ Outcomes Statement for First-Year Composition (58). His starting point is “Revising FYC Outcomes for a Multimodal, Digitally Composed World,” a 2014 article by six of the ten composition faculty who served on the task force to develop Outcomes (OS) 3.0 (57).

Sills considers the 2014 article a “perfectly respectable history” of the document (58), but argues that such histories do not capture the “multivocality” of any policymaking process (59). He draws on Chris Gallagher to contend that official documents like the three Outcomes Statements present a finished product that erases debates and disagreements that go into policy recommendations (59). Sills cites Michel Foucault’s view that, in contrast, a genealogy replaces “the monotonous finality” (qtd. in Sills 59) of a history by “excavat[ing] the ambiguities” that characterized the deliberative process (59).

For Sills, Outcomes 3.0 shares with previous versions of the Outcomes Statement the risk that it will be seen as “hegemonic” and that its status as an official document will constrain teachers and programs from using it to experiment and innovate (75-76). He argues that sharing the various contentions that arose as the document was developed can enhance its ability to function as, in the words of Susan Leigh Star, a document of “cooperation without consensus” (qtd. in Sills 73) that does not preclude interpretations that may not align with a perceived status quo (76). Rather, in Sills’s view, revealing the different voices involved in its production permits Outcomes 3.0 to be understood as a “boundary object,” that is, an object that is

strictly defined within a particular community of practice, but loosely defined across different communities of practice. . . . [and that] allows certain terms and concepts . . . to encompass many different things. (74)

He believes that “[k]eeping policy deliberations audible” (76) will encourage instructors and programs to interpret the document’s positions flexibly as they come to see how many different approaches were brought to bear in generating the final text.

Sills invited all ten task force members to participate in “discourse-based” interviews. Five agreed: Dylan Dryer, Susanmarie Harrington, Bump Halbritter, Beth Brunk-Chavez, and Kathleen Blake Yancey (60-61). Discussion focused on deliberations around the terms “composing, technology, and genre” (61; emphasis original).

Sills’s discussion of the deliberations around “composing” focuses on the shift from “writing” as a key term to a less restrictive term that could encompass many different ways in which people communicate today (61). Sills indicates that the original Outcomes Statement (1.0) of 2000 made digital practices a “residual category” in comparison to traditional print-based works, while the 3.0 task force worked toward a document that endorsed both print and multimodal practices without privileging either (63).

Ideally, in the interviewees’ views, curricula in keeping with Outcomes 3.0 recognize composing’s “complexity,” regardless of the technologies involved (65). At the same time, in Sills’s analysis, the multiplicity of practices incorporated under composing found common ground in the view, in Dryer’s words, that “we teach writing, we’re bunch of writers” (qtd. in Sills 65).

Sills states that the “ambiguity” of terms like “composing” served not only to open the door to many forms of communicative practice but also to respond to the “kairotic” demands of a document like Outcomes 3.0. Interviewees worried that naming specific composing practices would result in guidelines that quickly fell out of date as composing options evolved (64).

According to Sills, interviews about the deliberations over genre revealed more varied attitudes than those about composing (66). In general, the responses Sills records suggest a movement away from seeing genre as fixed “static form[s]” (67) calling for a particular format toward recognizing genres as fluid, flexible, and responsive to rhetorical situations. Sills quotes Dryer’s claim that the new document depicts “students and readers and writers” as “much more agentive”; “genres change and . . . readers and writers participate in that change” (qtd. in Sills 67). Halbritter emphasizes a shift from “knowledge about” forms to a process of “experiential learning” as central to the new statement’s approach (68). For Harrington, the presentation of genre in the new document reflects attention to “habits of mind” such as rhetorical awareness and “taking responsibility for making choices” (qtd. in Sills 69).

Brunk-Chavez’s interview addresses the degree to which, in the earlier statements, technology was handled as a distinct element when genre was still equated primarily with textual forms. In the new document, whatever technology is being used is seen as integral to the genre being produced (69). Moreover, she notes that OS 3.0’s handling of genre opens it to types of writing done across disciplines (70).

She joins Yancey, however, in noting the need for the document to reflect “the consensus of the field” (72). While there was some question as to whether genre as a literary or rhetorical term should even be included in the original OS, Yancey argues that the term’s “time has come” (71). Yet the interviews capture a sense that not every practitioner in composition shares a common understanding of the term and that the document should still be applicable, for example, to instructors for whom “genre” still equates with modes (71).

In addressing this variation in the term’s function in practice, Sills notes Yancey’s desire for OS 3.0 to be a “bridging document” that does not “move too far ahead of where the discipline is,” linking scholarly exploration of genre with the many ways practitioners understand and use the term (72).

Sills considers challenges that the OS 3.0 must address if it is to serve the diverse and evolving needs of the field. Responding to concerns of scholars like Jeff Rice that the document imposes an ultimately conservative “ideology of generality” that amounts to a “rejection of the unusual” (qtd. in Sills 75), Sills acknowledges that the authority of the statement may prevent “subordinate communities of practice” like contingent faculty from “messing around with” its recommendations. But he contends that the task force’s determination to produce flexible guidelines and to foster ongoing revision can encourage “healthy resistance” to possible hegemony (76).

He further recommends specific efforts to expand participation, such as creating a Special Interest Group or a “standing institutional body” like an Outcomes Collective with rotating membership from which future task forces can be recruited on a regular timetable. Such ongoing input, he contends, can both invite diversity as teachers join the conversation more widely and assure the kairotic validity of future statements in the changing field (77-78).


Schiavone, Aubrey. Consumption vs. Production in Multimodal Textbooks. March CE. Posted 03/24/2017.

Schiavone, Aubrey. “Consumption, Production, and Rhetorical Knowledge in Visual and Multimodal Textbooks.” College English 79.4 (2017): 358-80. Print.

Aubrey Schiavone presents a study of four textbooks designed to support composition’s “multimodal turn” (359). In her view, these textbooks, published in the past fifteen years, can be positioned as “mainstream textbooks” likely to be used by a range of teachers, including teachers new to composition, in designing a class with multimodal components (363). Schiavone presents statistics on citation and sales to support her choice of these books (380).

Schiavone draws on the work of scholars like Robert J. Connors and A. Abby Knoblauch to argue that textbooks influence teachers’ decisions about what kinds of assignments are appropriate in writing classrooms (377). Thus, she argues for “mindful” attention to the particular messages embedded in textbooks about how best to teach activities such as multimodal composition (376). Her analysis suggests that an unself-conscious use of textbook assignments can limit the degree to which classroom practice accords with theories about the nature of multimodality and how students can best learn to respond to and use multimodal artifacts (371).

The books in her study are Picturing Texts (Lester Faigley, Diana George, Anna Palchik, and Cynthia Selfe, 2004); Rhetorical Visions: Reading and Writing in a Visual Culture (Wendy S. Hesford and Brenda Jo Brueggemann, 2007); Seeing & Writing 4 (Donald and Christine McQuade, 2010); and Beyond Words: Cultural Texts for Reading and Writing (John J. Ruszkiewicz, Daniel Anderson, and Christy Friend, 2006) (362).*

Developing her “[t]heoretical [f]ramework” (363), Schiavone cites a number of scholars including Diana George, Lester Faigley, and Steve Westbrook to propose that the use of multimodal elements can function in different relations to text. A “binary” relationship is one in which students are encouraged to examine, or “consume,” a visual or multimodal artifact and then produce a separate written text analyzing or responding to the artifact (364).

In a “linear” relationship, illustrated by assignments discussed by Westbrook, students examine products in one mode and then convert them to other modes, for example creating images to capture the meaning of a previously produced essay; in this kind of relationship, in Schiavone’s words, “students’ consumption of visual and multimodal artifacts functions as a kind of scaffolding up to their production of such texts” (365; emphasis original).

Finally, Schiavone identifies a “reciprocal” relationship, which “imagines consumption and production as necessarily interconnected” and, in her view, citing Faigley, encourages students to engage in more meaningful critical awareness of rhetorical processes as they produce their own multimodal artifacts (366).

Schiavone also investigates theoretical definitions of “visual” as opposed to “multimodal” artifacts. In her discussion, a “visual” artifact will be “monomodal” if students are encouraged only to examine an image, whereas artifacts that combine the visual with the textual (e.g., maps) or with other modes such as auditory elements can be more correctly identified as “multimodal.” Schiavone contends that the terms “visual” and “multimodal” have been “conflat[ed]” in some scholarship and that this distinction should be made more consistently (366-67).

In her analysis, Schiavone is concerned with the difference between “consumption” and “production” of various kinds of artifacts. Through her examination of “every assignment prompt across four textbooks, a total of 1,629 prompts” (371), Schiavone developed codes for “consumption” of textual, visual, or multimodal artifacts (i.e., CT, CV, or CMM) and for “production” of these artifacts (PT, PV, PMM) (369). She provides examples of each kind of code: a prompt receiving a code of CV, for example, might ask students to “examine [the] image carefully until you are reasonably confident that you understand and appreciate how it works. . . ,” while one receiving a PV code might require students to “create a visual adaptation” of another artifact (375, 369; examples in Schiavone’s analysis are taken from McQuade and McQuade, Seeing & Writing).

She notes that some prompts can receive more than one code, for example calling for the consumption of a multimodal artifact and then the production of a textual response (370). She argues that such combinations of codes can either reinforce a binary approach by separating the activities involved in “reductive” ways (374), or they can encourage a more complex understanding of how multimodal composition can work. However, she states, “complexity is not the norm,” with 49% of the prompts receiving only one code and 33% receiving only two (374).

Her findings indicate a “misalignment” between theoretical approaches that advocate more production of multimodal projects in writing classrooms and what the four textbooks appear to promote (373). One result is that the textbooks call for much more production of text than of either visual or multimodal artifacts (372). She detects a pattern in which prompts receiving “linked codes” required students to consume a visual or multimodal item, then produce an essay about the item (374-75). She argues that this pattern perpetuates binary or linear approaches to multimodal instruction.

Her analysis further indicates variation across the textbooks, with Picturing Texts calling for proportionally more production of visual and multimodal items (PT = 28%, PV = 6%, PMM = 25%) than the four books as a whole (PT = 36%, PV = 2%, PMM = 11%) (373).

Schiavone concludes that both individual instructors and compositionists engaged in teacher-training must “be mindful about their uptake of textbook assignment prompts” (376). This caution, she suggests, is especially important when instructors are not necessarily specialists in rhetoric and composition (376). Theory and guidance from sources such as the WPA Outcomes Statement should be more visible in the texts and in the development of instructors (376-77, 378). Textbooks should be seen as “teaching tools rather than full teaching plans” in composition classrooms (377).

Schiavone also notes that the textbooks provided far more prompts than could conceivably be used in any single course, and suggests that the authors could more fruitfully “pay better attention to assignment sequencing” than to quantity of materials (377).

Ideally, in her view, such “mindfulness” should lead to multimodal pedagogies that are “theoretically grounded and rhetorically rich” (378).

*Online searches suggest that some of these texts have subsequently appeared in later editions or with different titles, and some are out of print.


Wooten et al. SETs in Writing Classes. WPA, Fall 2016. Posted 02/11/2016.

Wooten, Courtney Adams, Brian Ray, and Jacob Babb. “WPAs Reading SETs: Toward an Ethical and Effective Use of Teaching Evaluations.” Journal of the Council of Writing Program Administrators 40.1 (2016): 50-66. Print.

Courtney Adams Wooten, Brian Ray, and Jacob Babb report on a survey examining the use of Student Evaluations of Teaching (SETs) by writing program administrators (WPAs).

According to Wooten et al., although WPAs appear to be dissatisfied with the way SETs are generally used and have often attempted to modify the form and implementation of these tools for evaluating teaching, they have done so without the benefit of a robust professional conversation on the issue (50). Noting that much of the research they found on the topic came from areas outside of writing studies (63), the authors cite a single collection on using SETs in writing programs by Amy Dayton that recommends using SETs formatively and as one of several measures to assess teaching. Beyond this source, they cite “the absence of research on SETs in our discipline” as grounds for the more extensive study they conducted (51).

The authors generated a list of WPA contact information at more than 270 institutions, ranging from two-year colleges to private and parochial schools to flagship public universities, and solicited participation via listservs and emails to WPAs (51). Sixty-two institutions responded in summer 2014 for a response rate of 23%; 90% of the responding institutions were four-year institutions.

Despite this low response rate, the authors found the data informative (52). They note that the difficulty in recruiting faculty responses from two-year colleges may have resulted from problems in identifying responsible WPAs in programs where no specific individual directed a designated writing program (52).

Their survey, which they provide, asked demographic and logistical questions to establish current practice regarding SETs at the responding institutions as well as questions intended to elicit WPAs’ attitudes toward the ways SETs affected their programs (52). Open-ended questions allowed elaboration on Likert-scale queries (52).

An important recurring theme in the responses involved the kinds of authority WPAs could assert over the type and use of SETs at their schools. Responses indicated that the degree to which WPAs could access student responses and could use them to make hiring decisions varied greatly. Although 76% of the WPAs could read SETs, a similar number indicated that department chairs and other administrators also examined the student responses (53). For example, in one case, the director of a first-year-experience program took primary charge of the evaluations (53). The authors note that WPAs are held accountable for student outcomes but, in many cases, cannot make personnel decisions affecting these outcomes (54).

Wooten et al. report other tensions revolving around WPAs’ authority over tenured and tenure-track faculty; in these cases, surveyed WPAs often noted that they could influence neither curricula nor course assignments for such faculty (54). Many WPAs saw their role as “mentoring” rather than “hiring/firing.” The WPAs were obliged to respond to requests from external authorities to deal with poor SETs (54); the authors note a “tacit assumption . . . that the WPA is not capable of interpreting SET data, only carrying out the will of the university” (54). They argue that “struggles over departmental governance and authority” deprive WPAs of the “decision-making power” necessary to do the work required of them (55).

The survey “revealed widespread dissatisfaction” about the ways in which SETs were administered and used (56). Only 13% reported implementing a form specific to writing; more commonly, writing programs used “generic” forms that asked broad questions about the teacher’s apparent preparation, use of materials, and expertise (56). The authors contend that these “indirect” measures do not ask about practices specific to writing and may elicit negative comments from students who do not understand what kinds of activities writing professionals consider most beneficial (56).

Other issues of concern include the use of online evaluations, which provide data that can be easily analyzed but result in lower participation rates (57). Moreover, the authors note, WPAs often distrust numerical data without the context provided by narrative responses, to which they may or may not have access (58).

Respondents also noted confusion or uncertainty about how an institution determines what constitutes a “good” or “poor” score. Many of these decisions are determined by comparing an individual teacher’s score to a departmental or university-wide average, with scores below the average signaling the need for intervention. The authors found evidence that even WPAs may fail to recognize that lower scores can be influenced not just by the grade the student expects but also by gender, ethnicity, and age, as well as whether the course is required (58-59).

Wooten et al. distinguish between “teaching effectiveness,” a basic measure of competence, and “teaching excellence,” practices and outcomes that can serve as benchmarks for other educators (60). They note that at many institutions, SETs appear to have little influence over recognition of excellence, for example through awards or commendations; classroom observations and teaching portfolios appear to be used more often for these determinations. SETs, in contrast, appear to have a more “punitive” function (61), used more often to single out teachers who purportedly fall short in effectiveness (60).

The authors note the vulnerability of contingent and non-tenure-track faculty to poorly implemented SETs and argue that a climate of fear occasioned by such practices can lead to “lenient grading and lowered demands” (61). They urge WPAs to consider the ethical implications of the use of SETs in their institutions.

Recommendations include “ensuring high response rates” through procedures and incentives; clarifying and standardizing designations of good and poor performance and ensuring transparency in the procedures for addressing low scores; and developing forms specific to local conditions and programs (61-62). Several of the recommendations concern increasing WPA authority over hiring and mentoring teachers, including tenure-track and tenured faculty. Wooten et al. recommend that all teachers assigned to writing courses administer writing-specific evaluations and be required to act on the information these forms provide; the annual-report process can allow tenured faculty to demonstrate their responsiveness (62).

The authors hope that these recommendations will lead to a “disciplinary discussion” among WPAs that will guide “the creation of locally appropriate evaluation forms that balance the needs of all stakeholders—students, teachers, and administrators” (63).

Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of their peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric” using a software package, My Reviewers, that allowed for discursive comments but also, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher-scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer review so that it benefits the widest range of students (73-74).

Boyle, Casey. Rhetoric and/as Posthuman Practice. CE, July 2016. Posted 08/06/2016.

Boyle, Casey. “Writing and Rhetoric and/as Posthuman Practice.” College English 78.6 (2016): 532-54. Print.

Casey Boyle examines the Framework for Success in Postsecondary Writing, issued by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project, in light of its recommendation that writing instruction encourage the development of “habits of mind” that result in enhanced learning.

Boyle focuses especially on the Framework‘s attention to “metacognition,” which he finds to be largely related to “reflection” (533). In Boyle’s view, when writing studies locates reflection at the center of writing pedagogy, as he argues it does, the field endorses a set of “bad habits” that he relates to a humanist mindset (533). Boyle proposes instead a view of writing and writing pedagogy that is “ecological” and “posthuman” (538). Taking up Kristine Johnson’s claim that the Framework opens the door to a revitalization of “ancient rhetorical training,” Boyle challenges the equation of such training with a central mission of social and political critique (534).

Boyle recounts a history of writing pedagogy beginning with “current-traditional rhetoric” as described by Sharon Crowley and others as the repetitive practice of form (535). Rejection of this pedagogy resulted in a shift toward rhetorical and writing education as a means of engaging students with their social and political surroundings. Boyle terms this focus “current-critical rhetoric” (536). Its primary aim, he argues, is to increase an individual’s agency in that person’s dealings with his or her cultural milieu, enhancing the individual’s role as a citizen in a democratic polity (536).

Boyle critiques current-critical rhetoric, both in its approach to the self and in its insistence on the importance of reflection as a route to critical awareness, for its determination to value the individual’s agency over the object, which is viewed as separate from the acting self (547). Boyle cites Peter Sloterdijk’s view that the humanist sense of a writing self manifests itself in the “epistle or the letter to a friend” that demonstrates the existence of a coherent identity represented by the text (537). Boyle further locates a humanist approach in the “reflective letter assignments” that ask students to demonstrate their individual agency in choosing among many options as they engage in rhetorical situations (537).

To develop the concept of the “ecological orientation” (538) that is consistent with a posthumanist mindset, Boyle explores a range of iterations of posthumanism, which he stresses is not to be understood as “after the human” (539). Rather, quoting N. Katherine Hayles, Boyle characterizes posthumanism as “the end of a certain conception of the human” (qtd. in Boyle 539). Central to posthumanism is the idea of human practices as one component of a “mangled assemblage” of interactions among both human and nonhuman entities (541) in which separation of subject and object becomes impossible. In this view, “rhetorical training” would become “an orchestration of ecological relations” (539), in which practices within a complex of technologies and environments, some of them not consciously summoned, would emerge from the relations and shape future practices and relations.

Boyle characterizes this understanding of practice as a relation of “betweenness among what was previously considered the human and the nonhuman” (540; emphasis in original). He applies Andrew Pickering’s metaphor of practice as a “reciprocal tuning of people and things” (541). In such an orientation, “[t]heory is a practice” that “is continuous with and not separate from the mediation of material ecologies” (542). Practice becomes an “ongoing tuning” (542) that functions as a “way of becoming” (Robert Yagelski, qtd. in Boyle 538; emphasis in original).

In Boyle’s view, the Framework points toward this ecological orientation in stressing the habit of “openness” to “new ways of being” (qtd. in Boyle 541). In addition, the Framework envisions students “writing in multiple environments” (543; emphasis in Boyle). Seen in a posthuman light, such multiple exposures redirect writers from the development of critical awareness to, in Pickering’s formulation, knowledge understood as a “sensitivity” to the interactions of ecological components in which actors both human and nonhuman are reciprocally generative of new forms and understandings (542). Quoting Isabelle Stengers, Boyle argues that “an ecology of practices does not have any ambition to describe things ‘as they are’ . . . but as they may become” (qtd. in Boyle 541).

In Boyle’s formulation, agency becomes “capacity,” which is developed through repeated practice that then “accumulates prior experience” to construct a “database of experience” that establishes the habits we draw on to engage productively with future environments (545). Such an accumulation comes to encompass, in the words of Collin Brooke, “all of the ‘available means’” (qtd. in Boyle 549), not all of them visible to conscious reflection (544), through which we can affect and be affected by ongoing relations in rhetorical situations.

Boyle embodies such practice in the figure of the archivist “whose chief task is to generate an abundance of relations” rather than that of the letter writer (550), thus expanding options for being in the world. Boyle emphasizes that the use of practice in this way is “serial” in that each reiteration is both “continuous” and “distinct,” with the components of the series “a part of, but also apart from, any linear logic that might be imposed” (547): “Practice is the repetitive production of difference” (547). Practice also becomes an ethics that does not seek to impose moral strictures (548) but rather to enlarge and enable “perception” and “sensitivities” (546) that coalesce, in the words of Rosi Braidotti, in a “pragmatic task of self-transformation through humble experimentation” (qtd. in Boyle 539).

Boyle connects these endeavors to rhetoric’s historical allegiance to repetition through sharing “common notions” (Gilles Deleuze, qtd. in Boyle 550). Persuasion, he writes, “occurs . . . not as much through rational appeals to claims but through an exercise of material and discursive forms” (550), that is, through relations enlarged by habits of practice.

Related to this departure from conscious rational analysis is Boyle’s proposed posthuman recuperation of “metacognition,” which he states has generally been perceived to involve analysis from a “distance or remove from an object to which one looks” (551). In Boyle’s view, metacognition can be understood more productively through a secondary meaning that connotes “after” and “among” (551). Similarly, rhetoric operates not in the particular perception arising from a situated moment but “in between” the individual moment and the sensitivities acquired from experience in a broader context (550; emphasis in original):

[R]hetoric, by attending more closely to practice and its nonconscious and nonreflective activity, reframes itself by considering its operations as exercises within a more expansive body of relations than can be reduced to any individual human. (552)

Such a sensibility, for Boyle, should refigure writing instruction, transforming it into “a practice that enacts a self” (537) in an ecological relation to that self’s world.


Shepherd, Ryan P. Facebook, Gender, and Composition. C&C, Mar. 2016. Posted 03/06/2016.

Shepherd, Ryan P. “Men, Women, and Web 2.0 Writing: Gender Difference in Facebook Composing.” Computers and Composition 39 (2016): 14-26. Web. 22 Feb. 2016.

Ryan P. Shepherd discusses a study to investigate how gender differences affect the use of Web 2.0 platforms, specifically Facebook, as these differences relate to composition classes. He argues that, although a great deal of work has been done within composition studies to explore how gender manifests in writing classes, and much work has documented gender differences in online activities in fields such as psychology, education, and advertising (16), the ways in which gender differences in Web 2.0 affect students’ approaches to composition have not been adequately addressed by the field (14).

Shepherd notes that discussions of gender differences risk essentializing male and female populations, but cites research by Cynthia L. Selfe and Gail E. Hawisher as well as Nancy K. Baym to contend that evidence for different behaviors does “persist” across studies and should be considered as composition teachers incorporate digital practices into classrooms (15). Without attention to the ways online composing relates to “aspects of identity and how these aspects shape composing practices when integrating social network sites (SNSs) into FYC [first-year composition] classes” (15), composition teachers may miss opportunities to fully exploit Web 2.0 as a literacy experience and meet student needs (15, 24).

The data come from a survey of FYC students about their Facebook activities and attitudes toward Facebook as a composing platform. Developed through multiple pilots over the course of the 2011 academic year, the survey gathered 474 responses, mostly from freshmen enrolled in some form of FYC at Shepherd’s institution and at other “large, doctoral-granting institutions” from which Shepherd solicited participation via the Council of Writing Program Administrators’ listserv (17). The survey is available as a supplemental appendix.

Shepherd argues that Facebook is an appropriate site to study because of its widespread use by college students and its incorporation of “a number of literacy practices,” in particular what the 2004 CCCC Position Statement on digital writing calls “the literacy of the screen” (15). Shepherd first explores discussions of Facebook as it has been recommended for and incorporated into writing classes since 2008 as well as studies of student use of the platform (16). He then considers comprehensive work outside of composition on gender differences in the use of Facebook and other SNSs.

These studies vary in their results, with some showing that men and women do not differ in the amount of time they spend on SNSs and others showing that women do spend more time (17). Some studies find that women use such sites for more personal uses like email, compared to the finding that men are more likely to “surf” (17). Women in some parts of this body of research appear to engage more in “family activity,” to provide “more personal information in the ‘about me'” areas, and to worry more about privacy (17). Shepherd discusses one article about student use of Facebook that reveals that women use varied media more often; the article expresses concern about student comfort with online spaces and urges careful scaffolding in incorporating such spaces into classwork (17).

Shepherd presents his findings in a series of tables that reveal that gender had “a more statistically significant effect on more questions and often with more significant differences than any other independent variable” (18). The tables focus on the aspects in which these differences were evident.

In Shepherd’s view, gender difference significantly affected participants’ “rhetorical purposes,” their “different view[s] of audience,” and their varying “rhetorical stance[s]” (21). In general, he states that the data suggest that women are more concerned with “communicating with a broad audience,” while men appear more likely to see Facebook as a way to engage in “direct, personal communication” (22). Evidence for this conclusion comes from such data as the finding that women and men invested equally in comments and chat, while women were more likely to post status updates, which Shepherd suggests may be a type of “announcement . . . to a large group of people at one time” (22). Women are also more likely to visit friends’ pages. Shepherd’s data also indicate that women think more carefully about their posts and “were more mindful” about the effects of photos and other media, even to the point that they might be thinking in terms of visual arguments (22). Shepherd believes these findings accord with conclusions drawn by Linda A. Jackson, Kevin S. Ervin, Philip D. Gardner, and N. Schmitt in the journal Sex Roles, where they suggest that women are more “interpersonally oriented” while men are more “information/task oriented” (qtd. in Shepherd 23).

In general, women were “more aware of audience on Facebook” (23). Shepherd cites their tendency to consider their privacy settings more often; he proposes that women’s tendency to post more personal information may account for some part of their concern with privacy (23). Moreover, he found that women were more likely to be aware that employers could access information on Facebook. In short, it may be that women “tend to have a greater awareness of people beyond the immediate audience of Facebook friends than men do” (23).

Shepherd sees differences in “rhetorical stance” manifested in the ways that men and women characterize Facebook as a location for writing. In this case, men were more likely to see the platform as a site for serious, “formal” writing and argument (23). The data suggest that men saw many different types of Facebook activities, such as posting media, as “a type of composition” (23). Shepherd posits that because women tend to do more multimodal posting, they may be less likely to think of their Facebook activities as writing or composition (23). He urges more investigation into this disparity (24).

Gender is just one of the differences that Shepherd contends should be taken into account when incorporating Web 2.0 into writing classrooms. His study reveals variation across “age, year in university, language, and attitude toward writing” (24). He suggests that women’s tendency to reflect more on their writing on Facebook can be helpful in course work where reflection on writing is called for (22); similarly, women’s use of multiple forms of media can be leveraged into discussions of visual rhetoric (22). In particular, he writes, students “may not be aware of the rhetorical choices they are making in their Facebook use and how these choices relate to the audience that they have crafted” (24).

Attention to gender, he contends, is an important part of making exploration of such choices and their effects a productive literacy experience when Facebook and other SNSs become part of a composition class (24).


Obermark et al. New TA Development Model. WPA, Fall 2015. Posted 02/08/2016.

Obermark, Lauren, Elizabeth Brewer, and Kay Halasek. “Moving from the One and Done to a Culture of Collaboration: Revising Professional Development for TAs.” Journal of the Council of Writing Program Administrators 39.1 (2015): 32-53. Print.

Lauren Obermark, Elizabeth Brewer, and Kay Halasek detail a professional development model for graduate teaching assistants (TAs) that was established at their institution to better meet the needs of both beginning and continuing TAs. Their model responded to the call from E. Shelley Reid, Heidi Estrem, and Marcia Belcheir to “[g]o gather data—not just impressions—from your own TAs” in order to understand and foreground local conditions (qtd. in Obermark et al. 33).

To examine and revise their professional development process beginning in 2011 and continuing through 2013, Obermark et al. conducted a survey of current TAs, held focus groups, and surveyed “alumni” TAs to determine TAs’ needs and their reactions to the support provided by the program (35-36).

An exigency for Obermark et al. was the tendency they found in the literature to concentrate TA training on the first semester of teaching. They cite Beth Brunk-Chavez to note that this tendency gives short shrift to the continuing concerns and professional growth of TAs as they advance from their early experiences in first-year writing to more complex teaching assignments (33). As a result of their research, Obermark et al. advocate for professional development that is “collaborative,” “ongoing,” and “distributed across departmental and institutional locations” (34).

The TA program in place at the authors’ institution prior to the assessment included a week-long orientation, a semester’s teaching practicum, a WPA class observation, and a syllabus built around a required textbook (34). After their first year, TAs were able to move on to other classes, particularly the advanced writing class, which fulfills a general education requirement across the university and is expected to provide a more challenging writing experience, including a “scaffolded research project” (35). Obermark et al. found that while students with broader teaching backgrounds were often comfortable with designing their own syllabus to meet more complex pedagogical requirements, many TAs who had moved from the well-supported first-year course to the second wished for more guidance than they had received (35).

Consulting further scholarship by Estrem and Reid led Obermark et al. to act on “a common error” in professional development: failing to conduct a “needs assessment” by directly asking questions designed to determine, in the words of Kathleen Blake Yancey, “the characteristics of the TAs for whom the program is designed” (qtd. in Obermark et al. 36-37). The use of interview methodology through focus groups not only instilled a collaborative ethos, it also permitted the authors to plan “developmentally appropriate PD” and provided TAs with what the authors see as a rare opportunity to reflect on their experiences as teachers. Obermark et al. stress that this fresh focus on what Cynthia Selfe and Gail Hawisher call a “participatory model of research” (37) allowed the researchers to demonstrate their perceptions of the TAs as professional colleagues, leading the TAs themselves “to identify more readily as professionals” (37).

TAs’ sense of themselves as professionals was further strengthened by the provision of “ongoing” support to move beyond what Obermark et al. call “the one and done” model (39). Through the university teaching center, they encountered Jody Nyquist and Jo Sprague’s theory of three stages of TA development: “senior learners” who “still identify strongly with students”; “colleagues in training” who have begun to recognize themselves as teachers; and “junior colleagues” who have assimilated their professional identities to the point that they “may lack only the formal credentials” (qtd. in Obermark et al. 39). Obermark et al. note that their surveys revealed, as Nyquist and Sprague predicted, that their population comprised TAs at all three levels as they moved through these stages at different rates (39-40).

The researchers learned that even experienced TAs still often had what might have been considered basic questions about the goals of the more advanced course and how to integrate the writing process into the course’s general education outcomes (40). The research revealed that as TAs moved past what Nyquist and Sprague denoted the “survival” mode that tends to characterize a first year of teaching, they began to recognize the value of composition theory and became more invested in applying theory to their teaching (39). That 75% of the alumni surveyed were teaching writing in their institutions regardless of their actual departmental positions reinforced the researchers’ certainty and the TAs’ awareness that composition theory and practice would be central to their ongoing academic careers (40).

Refinements included a more extensive schedule of optional workshops and a “peer-to-peer” program that responded to TA requests for more opportunities to observe and interact with each other. Participating TAs received guidance on effective observation processes and feedback; subsequent expansion of this program offered TAs opportunities to share approaches to designing assignments and grading as well (42).

The final component of the new professional-development model focused on expanding the process of TA support across both the English department and the wider university. Obermark et al. indicate that many of the concerns expressed by TAs addressed not just teaching writing with a composition-studies emphasis but also teaching more broadly in areas that “did not fall neatly under our domain as WPAs and specialists in rhetoric and composition” (43). For example, TAs asked for more guidance in working with students’ varied learning styles and, in particular, in meeting the requirement for “social diversity” expressed in the general-education outcomes for the more advanced course (44). Some alumni TAs reported wishing for more help teaching in other areas within English, such as in literature courses (45).

The authors designed programs featuring faculty and specialists in different pedagogical areas, such as diversity, as well as workshops and break-outs in which TAs could explore kinds of teaching that would apply across the many different environments in which they found themselves as professionals (45). Obermark et al. note especially the relationship they established with the university teaching center, a collaboration that allowed them to integrate expertise in composition with other philosophies of teaching and that provided “allies in both collecting data and administering workshops for which we needed additional expertise” (45). Two other specific benefits from this partnership were the enhanced “institutional memory” that resulted from inclusion of a wider range of faculty and staff and increased sustainability for the program as a larger university population became invested in the effort (45-46).

Obermark et al. provide their surveys and focus-group questions, urging other WPAs to engage TAs in their own development and to relate to them “as colleagues in the field rather than novices in need of training, inoculation, or the one and done approach” (47).


Anderson et al. Contributions of Writing to Learning. RTE, Nov. 2015. Posted 12/17/2015.

Anderson, Paul, Chris M. Anson, Robert M. Gonyea, and Charles Paine. “The Contributions of Writing to Learning and Development: Results from a Large-Scale, Multi-institutional Study.” Research in the Teaching of English 50.2 (2015): 199-235. Print.

Note: The study referenced by this summary was reported in Inside Higher Ed on Dec. 4, 2015. My summary may add some specific details to the earlier article and may clarify some issues raised in the comments on that piece. I invite the authors and others to correct and elaborate on my report.

Paul Anderson, Chris M. Anson, Robert M. Gonyea, and Charles Paine discuss a large-scale study designed to reveal whether writing instruction in college enhances student learning. They note widespread belief both among writing professionals and other stakeholders that including writing in curricula leads to more extensive and deeper learning (200), but contend that the evidence for this improvement is not consistent (201-02).

In their literature review, they report on three large-scale studies that show increased student learning in contexts rich in writing instruction. These studies concluded that the amount of writing in the curriculum improved learning outcomes (201). However, these studies contrast with the varied results from many “small-scale, quasi-experimental studies that examine the impact of specific writing interventions” (200).

Anderson et al. examine attempts to perform meta-analyses across such smaller studies to distill evidence regarding the effects of writing instruction (202). They postulate that these smaller studies often explore such varied practices in so many diverse environments that it is hard to find “comparable studies” from which to draw conclusions; the specificity of the interventions and the student populations to which they are applied make generalization difficult (203).

The researchers designed their investigation to address the disparity among these studies by searching for positive associations between clearly designated best practices in writing instruction and validated measures of student learning. In addition, they wanted to know whether the effects of writing instruction that used these best practices differed from the effects of simply assigning more writing (210). The interventions and practices they tested were developed by the Council of Writing Program Administrators (CWPA), while the learning measures were those used in the National Survey of Student Engagement (NSSE). This collaboration resulted from a feature of the NSSE in which institutions may form consortia to “append questions of specific interest to the group” (206).

Anderson et al. note that an important limitation of the NSSE is its reliance on self-report data, but they contend that “[t]he validity and reliability of the instrument have been extensively tested” (205). Although the institutions sampled were self-selected and women, large institutions, research institutions, and public schools were over-represented, the authors believe that the overall diversity and breadth of the population sampled by the NSSE/CWPA collaboration, encompassing more than 70,000 first-year and senior students, permits generalization that has not been possible with more narrowly targeted studies (204).

The NSSE queries students on how often they have participated in pedagogic activities that can be linked to enhanced learning. These include a wide range of practices such as service-learning, interactive learning, “institutionally challenging work” such as extensive reading and writing; in addition, the survey inquires about campus features such as support services and relationships with faculty as well as students’ perceptions of the degree to which their college experience led to enhanced personal development. The survey also captures demographic information (205-06).

Chosen as dependent variables for the joint CWPA/NSSE study were two NSSE scales:

  • Deep Approaches to Learning, which encompassed three subscales, Higher-Order Learning, Integrative Learning, and Reflective Learning. This scale focused on activities related to analysis, synthesis, evaluation, combination of diverse sources and perspectives, and awareness of one’s own understanding of information (211).
  • Perceived Gains in Learning and Development, which involved subscales of Practical Competence such as enhanced job skills, including the ability to work with others and address “complex real-world problems”; Personal and Social Development, which inquired about students’ growth as independent learners with “a personal code of values and ethics” able to “contribut[e] to the community”; and General Education Learning, which includes the ability to “write and speak clearly and effectively, and to think critically and analytically” (211).

The NSSE also asked students for a quantitative estimate of how much writing they actually did in their coursework (210). These data allowed the researchers to separate the effects of simply assigning more writing from those of employing different kinds of writing instruction.

To test for correlations between pedagogical choices in writing instruction and practices related to enhanced learning as measured by the NSSE scales, the research team developed a “consensus model for effective practices in writing” (206). Eighty CWPA members generated questions that were distilled to 27 and divided into “three categories based on related constructs” (206). Twenty-two of these ultimately became part of a module appended to the NSSE that, like the NSSE “Deep Approaches to Learning” scale, asked students how often their coursework had included the specific activities and behaviors in the consensus model. The “three hypothesized constructs for effective writing” (206) were

  • Interactive Writing Processes, such as discussing ideas and drafts with others, including friends and faculty;
  • Meaning-Making Writing Tasks, such as using evidence, applying concepts across domains, or evaluating information and processes; and
  • Clear Writing Expectations, which refers to teacher practices in making clear to students what kind of learning an activity promotes and how student responses will be assessed. (206-07)

They note that no direct measures of student learning are included in the NSSE, nor are such measures included in their study (204). Rather, in both the writing module and the NSSE scale addressing Deep Approaches to Learning, students are asked to report on kinds of assignments, instructor behaviors and practices, and features of their interaction with their institutions, such as whether they used on-campus support services (205-06). The scale on Perceived Gains in Learning and Development asks students to self-assess (211-12).

Despite the lack of specific measures of learning, Anderson et al. argue that the curricular content included in the Deep Approaches to Learning scale does accord with content that has been shown to result in enhanced student learning (211, 231). The researchers argue that comparisons between the NSSE scales and the three writing constructs allow them to detect an association between the effective writing practices and the attitudes toward learning measured by the NSSE.

Anderson et al. provide detailed accounts of their statistical methods. In addition to analysis for goodness-of-fit, they performed “blocked hierarchical regressions” to determine how much of the variance in responses was explained by the kind of writing instruction reported versus other factors, such as demographic differences, participation in various “other engagement variables” such as service-learning and internships, and the actual amount of writing assigned (212). Separate regressions were performed on first-year students and on seniors (221).

Results “suggest[ed] that writing assignments and instructional practices represented by each of our three writing scales were associated with increased participation in Deep Approaches to Learning, although some of that relationship was shared by other forms of engagement” (222). Similarly, the results indicate that “effective writing instruction is associated with more favorable perceptions of learning and development, although other forms of engagement share some of that relationship” (224). In both cases, the amount of writing assigned had “no additional influence” on the variables (222, 223-24).

The researchers provide details of the specific associations among the three writing constructs and the components of the two NSSE scales. Overall, they contend, their data strongly suggest that the three constructs for effective writing instruction can serve “as heuristics that instructors can use when designing writing assignments” (230), both in writing courses and courses in other disciplines. They urge faculty to describe and research other practices that may have similar effects, and they advocate additional forms of research helpful in “refuting, qualifying, supporting, or refining the constructs” (229). They note that, as a result of this study, institutions can now elect to include the module “Experiences with Writing,” which is based on the three constructs, when students take the NSSE (231).

 


Vidali, Amy. Disabling Writing Program Administration. WPA, Sept. 2015. Posted 10/28/2015.

Vidali, Amy. “Disabling Writing Program Administration.” Journal of the Council of Writing Program Administrators 38.2 (2015): 32-55. Print.

Amy Vidali examines the narratives of writing program administrators (WPAs) from the standpoint of disability studies. She argues that the way in which these narratives frame the WPA experience excludes instructive considerations of the intersections between WPA work and disability even though disability functions metaphorically in these texts. Her analysis explores the degree to which “these narratives establish normative expectations of who WPAs are and can be” (33).

Drawing on disability scholars Jay Dolmage and Carrie Sandahl (48n3, 49n4), Vidali proposes “disabling writing program work” (33; emphasis original). Similar to “crip[ping]” an institution or activity, disabling brings to the fore “able-bodied assumptions and exclusionary effects” (Sandahl, qtd. in Vidali 49n4) and tackles the disabling/enabling binary (49). Vidali’s examination of the WPA literature addresses its tendency to privilege ableist notions of success, to exclude access to disabled individuals, and to ignore the insights offered by the lens of disability.

In Vidali’s view, the WPA accounts she extracts from many sources focus on disabilities like depression and anxiety, generally positing that WPA work causes such disabilities and that they are an inevitable part of the WPA landscape that must be managed or “escaped” (37, 39). She uses her own experience with depression to discuss how identifying the mental and physical manifestations of depression solely with the stresses of WPA work impoverishes the field’s understanding of “how anxiety might be produced in the interaction of bodies and environments” (40), which occurs in any complex group configuration; recognition of this interaction removes the responsibility for the disability and its effects from “particular problem bodies” and locates it in the larger set of relationships, including inequities, among people and institutions (42). In other words, for Vidali, acknowledging the existence of disabilities outside of and prior to WPA work and their embodied influence within that work can allow scholars to “reframe WPA narratives in more productive ways” (41).

Vidali writes that the failure to recognize disability as an embodied human state interacting with the WPA environment is exacerbated by the lack of data on the number of WPAs with disabilities and on the kinds of disabilities they bring to the task. Vidali examines surveys in which researchers shied away from asking questions about disability for fear respondents might not feel comfortable answering, especially since revealing disability can lead to discrimination (44, 47).

Particularly damaging, she argues, are narratives often critiqued within the disability-studies community, for example, accounts of “overcoming” the burdens of disability, hero-narratives, and equations between “health” and “success.” Drawing on Paul Longmore and Simi Linton, Vidali writes that narratives of overcoming demand that individuals deal with the difficulties created by their interaction with environments in an effort to accommodate themselves to normal expectations, but these narratives refuse to acknowledge “the power differential” involved and increase the pressure to make do with non-inclusive situations rather than advocate for change (42).

Similarly, in Vidali’s view, hero narratives suggest that only the “hyper-able” are qualified to be WPAs; images of the WPA as miraculous and unflappable problem-solver deny the possibility that people “who may work at different paces and in different manners” can be equally effective (43). Such narratives risk “reifying unreasonable job expectations” that may further exclude disabled individuals as well as reinforcing the assumption that candidates for WPA work “all enter WPA positions with the same abilities, tools, and goals” (43). Vidali argues that such views of the ideal WPA coincide with a model in which health is a necessity for success and ultimately “only the fittest survive as WPAs” (40).

Vidali proposes alternatives to extant WPA narratives that open the door to more “interdependent” interaction that permits individuals to care for themselves and each other (40-41). Changes to the expectations WPAs have for themselves and each other can value such qualities as “productive designation of tasks to support teams” and acceptance of a wider range of communication options (43). Moving away from the WPA as hyper-able hero can also permit reflection on failure and an effective response to its inevitability (42). Vidali notes how her own depression served as a catalyst for increased attention to inclusiveness and access in her program, and how its intersection with her WPA work alerted her to the ways that treating disability as a metaphor for something that must be disguised, rather than as an embodied reality experienced by many, limits WPAs’ options. She stresses her view that

disabling writing program administration isn’t only about disabled WPAs telling their stories: It’s about creating inclusive environments for all WPAs, not only at the time they are hired, but in ways that account for the embodied realities that come with time. (47)



T. Bourelle et al. Using Instructional Assistants in Online Classes. C&C, Sept. 2015. Posted 10/13/2015.

Bourelle, Tiffany, Andrew Bourelle, and Sherry Rankins-Robertson. “Teaching with Instructional Assistants: Enhancing Student Learning in Online Classes.” Computers and Composition 37 (2015): 90-103. Web. 6 Oct. 2015.

Tiffany Bourelle, Andrew Bourelle, and Sherry Rankins-Robertson discuss the “Writers’ Studio,” a pilot program at Arizona State University that utilized upper-level English and education majors as “instructional assistants” (IAs) in online first-year writing classes. The program was initiated in response to a request from the provost to cut budgets without affecting student learning or increasing faculty workload (90).

A solution was an “increased student-to-teacher ratio” (90). To ensure that the creation of larger sections met the goal of maintaining teacher workloads and respected the guiding principles put forward by the Conference on College Composition and Communication Committee for Best Practices in Online Writing Instruction in its March 2013 Position Statement, the team of faculty charged with developing the cost-saving measures supplemented “existing pedagogical strategies” with several innovations (91).

The writers note that one available cost-saving step was to avoid staffing underenrolled sections. To meet this goal, the team created “mega-sections” in which one teacher was assigned for every 96 students, the equivalent of a full-time load. Once the enrollment reached 96, a second teacher was assigned to the section, and the two teachers team-taught. T. Bourelle et al. give the example of a section of the second semester of the first-year sequence that enrolled 120 students and was taught by two instructors. These 120 students were assigned to 15-student subsections (91).

T. Bourelle et al. note several reasons why the new structure potentially increased faculty workload. They cite research by David Reinheimer to the effect that teaching writing online is inherently more time-intensive than instructors may expect (91). Second, the planned curriculum included more drafts of each paper, requiring more feedback. In addition, the course design required multimodal projects. Finally, students also composed “metacognitive reflections” to gauge their own learning on each project (92).

These factors prompted the inclusion of the IAs. One IA was assigned to each 15-student group. These upper-level students contributed to the feedback process. First-year students wrote four drafts of each paper: a rough draft that received peer feedback, a revised draft that received comments from the IAs, an “editing” draft students could complete using the writing center or online resources, and finally a submission to the instructor, who would respond by either accepting the draft for a portfolio or returning it with directions to “revise and resubmit” (92). Assigning portfolio grades fell to the instructor. The authors contend that “in online classes where students write multiple drafts for each project, instructor feedback on every draft is simply not possible with the number of students assigned to any teacher, no matter how she manages her time” (93).

T. Bourelle et al. provide extensive discussion of the ways the IAs prepared for their roles in the Writers’ Studio. A first component was an eight-hour orientation in which the assistants were introduced to important teaching practices and concepts, in particular the process of providing feedback. Various interactive exercises and discussions allowed the IAs to develop their abilities to respond to the multimodal projects required by the Studio, such as blogs, websites, or “sound portraits” (94). The instruction for IAs also covered the distinction between “directive” and “facilitative” feedback, with the latter designed to encourage “an author to make decisions and [give] the writer freedom to make choices” (94).

Continuing support throughout the semester included a “portfolio workshop” that enabled the IAs to guide students in their production of the culminating eportfolio requirement, which required methods of assessment unique to electronic texts (95). Bi-weekly meetings with the instructors of the larger sections to which their cohorts belonged also provided the IAs with the support needed to manage their own coursework while facilitating first-year students’ writing (95).

In addition, IAs enrolled in an online internship that functioned as a practicum comparable to practica taken by graduate teaching assistants at many institutions (95-97). The practicum for the Writers’ Studio internship reinforced work on providing facilitative feedback but especially incorporated the theory and practice of online instruction (96). T. Bourelle et al. argue that the effectiveness of the practicum experience was enhanced by the degree to which it “mirror[ed]” much of what the undergraduate students were experiencing in their first-year classes: “[B]oth groups of beginners are working within initially uncomfortable but ultimately developmentally positive levels of ambiguity, multiplicity, and open-endedness” (Barb Blakely Duffelmeyer, qtd. in T. Bourelle et al. 96). Still quoting Duffelmeyer, the authors contend that adding computers “both enriched and problematized” the pedagogical experience of the coursework for both groups (96), imposing the need for special attention to online environments.

Internship assignments also gave the IAs a sense of what their own students would be experiencing by requiring an eportfolio featuring what they considered their best examples of feedback to student writing as well as reflective papers documenting their learning (98).

The IAs in the practicum critiqued the first-year curriculum, for example suggesting stronger scaffolding for peer review and better timing of assignments. They wrote various instructional materials to support the first-year course activities (97).

Their contributions to the first-year course included “[f]acilitating discussion groups” (98) and “[d]eveloping supportive relationships with first-year writers” (100), but especially “[r]esponding to revised drafts” (99). T. Bourelle et al. note that the IAs’ feedback differed from that of peer reviewers in that the IAs had acquired background in composition and rhetorical theory; unlike writing-center tutors, the IAs were more versed in the philosophy and expectations embedded in the course itself (99). IAs were particularly helpful to students who had misread the assignments, and they were able to identify and mentor students who were falling behind (98, 99).

The authors respond to the critique that the IAs represented uncompensated labor by arguing that the Writers’ Studio offered a pedagogically valuable opportunity that would serve the students well if they pursued graduate or professional careers as educators, emphasizing the importance of designing such programs to benefit the students as well as the university (101). They present student and faculty testimony on the effectiveness of the IAs as a means of “supplement[ing] teacher interaction” rather than replacing it (102). While they characterize the “monetary benefit” to the university as “small” (101), they consider the project “successful” and urge other “teacher-scholars to build on what we have tried to do” (102).