College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Malek and Micciche. What Can Faculty Do about Dual-Credit? WPA, Spring 2017. Posted 08/03/2017.

Malek, Joyce, and Laura R. Micciche. “A Model of Efficiency: Pre-College Credit and the State Apparatus.” Journal of the Council of Writing Program Administrators 40.2 (2017): 77-97. Print.

Joyce Malek and Laura R. Micciche discuss the prevalence and consequences of dual- and concurrent-enrollment initiatives in universities and colleges as well as the effects of Advanced Placement (AP) exemptions. They view these arrangements as symptoms of increased “managerial” control of higher education, resulting in an emphasis on efficiency and economics at the expense of learning (79).

As faculty at the University of Cincinnati, they recount the history of various dual-enrollment programs in Ohio. The state’s Postsecondary Enrollment Options program (PSEO), which originated in 1989, as of 2007 gave students as early as 9th and 10th grades the opportunity to earn both high school and college credits (81). A 2008 program, Seniors to Sophomores (STS), initiated by then-Governor Strickland, allowed high-school seniors to “spend their senior year on a participating Ohio college or university campus,” taking “a full load” for college credit (81-82).

After a poor response to STS from students who were unable or unwilling to dispense with a senior year at their regular high school, this program was eventually included in “College Credit Plus” (CCP), in which students beginning in grade seven can earn as many as 30 college credits yearly through courses taught at their high schools by high-school teachers. At the authors’ institution, records of applying students “are assessed holistically and are reviewed against a newly developed state benchmark” that declares them, in the words of the standard, to be “remediation free in a subject” (qtd. in Malek and Micciche 82). The authors state that they were “unable to trace the history of these standards” (83); they speculate that the language arose because students enrolling in the program had proved unable to succeed at college work (82).

Malek and Micciche report that these initiatives often required commitment from writing-program faculty; for example, writing faculty at their university were instructed, along with faculty from history, Spanish, French, and math, to develop programs certifying high-school teachers to teach college coursework (83). Writing faculty were given two weeks to provide this service, with no additional funding and without the ability to design curriculum. The initiative proved to include as well a range of additional unfunded duties, such as class observations and assessment (83-84).

The authors note that funding for all such initiatives is not guaranteed, suggesting that the programs may not survive. In contrast, they note, “AP [Advanced Placement] credit is institutionalized and is here to stay” (84).

The authors see AP as a means of achieving the managerial goals of the “technobureaucrats” (84, 90) increasingly in charge of higher education. They contend that a major objective of such policy makers is the development of a system that delivers students to the university system as efficiently as possible and at the lowest cost to the consumer (78-79). The authors recognize the importance of reducing the cost of higher education—they note that in-state students earning exemption through as many as 36 AP credits can save $11,000 a year in tuition, while out-of-state students can save up to $26,334 (84). However, in their view, these savings, when applied to writing, come at the cost of both the opportunity to fully encounter the richness of writing as a means of communication and the chance to acquire the kind of practice that results in a confident, capable writer who will succeed in complex academic and professional environments (87).

Malek and Micciche present their experience with AP to illustrate their claim that higher education has been taken out of the hands of faculty and programs and handed over to technocrats (85), a trend that they characterize as “an alarming statist creep” (85). In Ohio, communicating their intentions only to “staff not positioned to object,” such as advisors, the Board of Regents lowered the AP score deemed acceptable for exemption from a 4 to a 3 (78). This change, the authors write, was “not predicated . . . on any research whatsoever” (87). Its main purpose, in the authors’ view, was to channel students as quickly as possible into Ohio institutions and to reduce students’ actual investment in college to two years (79). Efforts to network in hopes of creating “a cross-institutional objection to the change” came to naught (78).

Malek and Micciche document the growing incursion of AP into university programs by noting its rapid growth (88). Contending that few faculty know what is involved in AP scores, the authors question the ability of the AP organization to decide how scores translate into “acceptable” coursework and note that to earn a score of 3, a student need only answer correctly “a little more than 50 percent” of the multiple-choice questions on the exams (86).

Malek and Micciche express concern that the low status of first-year composition, as well as its nature as a required course, makes it especially vulnerable to takeover by state and managerial forces (89-90). Such takeover results in the loss of faculty positions and illustrates the “limited rhetorical power” of writing professionals, who have not succeeded in finding a voice in policy decisions and find themselves in “a reactive stance” in which they ultimately enable the managerial agenda (88-89). They find it unlikely that proposals for enhancing the status of writing studies in general will speak to the economic goals of policy makers outside of the field (90).

Similarly, they contend that “refus[ing] to participate” in the development of dual-credit initiatives will not stem the tide of such programs (92). An alternative is to become deeply involved in making sure that training for teachers in AP or dual or concurrent enrollment programs is as rich and theoretically informed as possible (92).

As a more productive means of strengthening the rhetorical agency of writing faculty, Malek and Micciche suggest “coalition-building” across a wide range of stakeholders (90). They illustrate such coalition-building with other colleges by presenting their alliance with the university’s College of Allied Health Sciences (CAHS) to design curricula to help students in CAHS courses improve as writers in their field (90-91). In their view, enlisting other disciplines in this way reinforces the importance of writing and should be seen “as a good thing” (91).

Also, noting that businesses spend “over 3 billion dollars annually to address writing deficiencies” (91), Malek and Micciche advocate for connections with local businesses, suggesting that managerial policy makers will be responsive to arguments about students’ need for “job readiness” (92).

Finally, they suggest enlisting students in efforts to lobby for the importance of college writing. They cite a study asking students to compare their AP courses with subsequent experiences in a required first-year composition course. Results showed that the AP courses were not a substitute for the college course (93). To build this coalition with students, the authors advocate asking students about their needs and, in response, possibly imagining a “refashioned idea of FYC,” even if doing so means that “we might have to give up some of our most cherished beliefs and values and further build on our strengths” (93).



Wooten et al. SETs in Writing Classes. WPA, Fall 2016. Posted 02/11/2016.

Wooten, Courtney Adams, Brian Ray, and Jacob Babb. “WPAs Reading SETs: Toward an Ethical and Effective Use of Teaching Evaluations.” Journal of the Council of Writing Program Administrators 40.1 (2016): 50-66. Print.

Courtney Adams Wooten, Brian Ray, and Jacob Babb report on a survey examining the use of Student Evaluations of Teaching (SETs) by writing program administrators (WPAs).

According to Wooten et al., although WPAs appear to be dissatisfied with the way SETs are generally used and have often attempted to modify the form and implementation of these tools for evaluating teaching, they have done so without the benefit of a robust professional conversation on the issue (50). Noting that much of the research they found on the topic came from areas outside of writing studies (63), the authors cite a single collection on using SETs in writing programs by Amy Dayton that recommends using SETs formatively and as one of several measures to assess teaching. Beyond this source, they cite “the absence of research on SETs in our discipline” as grounds for the more extensive study they conducted (51).

The authors generated a list of WPA contact information at more than 270 institutions, ranging from two-year colleges to private and parochial schools to flagship public universities, and solicited participation via listservs and emails to WPAs (51). Sixty-two institutions responded in summer 2014 for a response rate of 23%; 90% of the responding institutions were four-year institutions.

Despite this low response rate, the authors found the data informative (52). They note that the difficulty in recruiting faculty responses from two-year colleges may have resulted from problems in identifying responsible WPAs in programs where no specific individual directed a designated writing program (52).

Their survey, which they provide, asked demographic and logistical questions to establish current practice regarding SETs at the responding institutions as well as questions intended to elicit WPAs’ attitudes toward the ways SETs affected their programs (52). Open-ended questions allowed elaboration on Likert-scale queries (52).

An important recurring theme in the responses involved the kinds of authority WPAs could assert over the type and use of SETs at their schools. Responses indicated that the degree to which WPAs could access student responses and could use them to make hiring decisions varied greatly. Although 76% of the WPAs could read SETs, a similar number indicated that department chairs and other administrators also examined the student responses (53). For example, in one case, the director of a first-year-experience program took primary charge of the evaluations (53). The authors note that WPAs are held accountable for student outcomes but, in many cases, cannot make personnel decisions affecting these outcomes (54).

Wooten et al. report other tensions revolving around WPAs’ authority over tenured and tenure-track faculty; in these cases, surveyed WPAs often noted that they could influence neither curricula nor course assignments for such faculty (54). Many WPAs saw their role as “mentoring” rather than “hiring/firing.” The WPAs were obliged to respond to requests from external authorities to deal with poor SETs (54); the authors note a “tacit assumption . . . that the WPA is not capable of interpreting SET data, only carrying out the will of the university” (54). They argue that “struggles over departmental governance and authority” deprive WPAs of the “decision-making power” necessary to do the work required of them (55).

The survey “revealed widespread dissatisfaction” about the ways in which SETs were administered and used (56). Only 13% reported implementing a form specific to writing; more commonly, writing programs used “generic” forms that asked broad questions about the teacher’s apparent preparation, use of materials, and expertise (56). The authors contend that these “indirect” measures do not ask about practices specific to writing and may elicit negative comments from students who do not understand what kinds of activities writing professionals consider most beneficial (56).

Other issues of concern include the use of online evaluations, which provide data that can be easily analyzed but result in lower participation rates (57). Moreover, the authors note, WPAs often distrust numerical data without the context provided by narrative responses, to which they may or may not have access (58).

Respondents also noted confusion or uncertainty about how an institution determines what constitutes a “good” or “poor” score. Many of these decisions are determined by comparing an individual teacher’s score to a departmental or university-wide average, with scores below the average signaling the need for intervention. The authors found evidence that even WPAs may fail to recognize that lower scores can be influenced not just by the grade the student expects but also by gender, ethnicity, and age, as well as whether the course is required (58-59).
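To make this flagging practice concrete, here is a minimal sketch of the below-average comparison the respondents describe; the instructor names and scores are hypothetical and are not data from the survey.

```python
# Minimal sketch (hypothetical data): compare each instructor's mean SET score
# to the departmental mean and flag scores falling below it, as respondents
# report their institutions doing.
from statistics import mean

section_scores = {
    "Instructor A": [4.6, 4.4, 4.8],
    "Instructor B": [3.9, 4.1, 4.0],
    "Instructor C": [4.2, 4.5, 4.3],
}

dept_mean = mean(s for scores in section_scores.values() for s in scores)

for name, scores in section_scores.items():
    instructor_mean = mean(scores)
    status = "below departmental average" if instructor_mean < dept_mean else "at or above average"
    print(f"{name}: {instructor_mean:.2f} ({status}; dept mean {dept_mean:.2f})")

# As the authors note, such comparisons ignore known confounds: expected grade,
# gender, ethnicity, age, and whether the course is required.
```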

Wooten et al. distinguish between “teaching effectiveness,” a basic measure of competence, and “teaching excellence,” practices and outcomes that can serve as benchmarks for other educators (60). They note that at many institutions, SETs appear to have little influence over recognition of excellence, for example through awards or commendations; classroom observations and teaching portfolios appear to be used more often for these determinations. SETs, in contrast, appear to have a more “punitive” function (61), used more often to single out teachers who purportedly fall short in effectiveness (60).

The authors note the vulnerability of contingent and non-tenure-track faculty to poorly implemented SETs and argue that a climate of fear occasioned by such practices can lead to “lenient grading and lowered demands” (61). They urge WPAs to consider the ethical implications of the use of SETs in their institutions.

Recommendations include “ensuring high response rates” through procedures and incentives; clarifying and standardizing designations of good and poor performance and ensuring transparency in the procedures for addressing low scores; and developing forms specific to local conditions and programs (61-62). Several of the recommendations concern increasing WPA authority over hiring and mentoring teachers, including tenure-track and tenured faculty. Wooten et al. recommend that all teachers assigned to writing courses administer writing-specific evaluations and be required to act on the information these forms provide; the annual-report process can allow tenured faculty to demonstrate their responsiveness (62).

The authors hope that these recommendations will lead to a “disciplinary discussion” among WPAs that will guide “the creation of locally appropriate evaluation forms that balance the needs of all stakeholders—students, teachers, and administrators” (63).



West-Puckett, Stephanie. Digital Badging as Participatory Assessment. CE, Nov. 2016. Posted 11/17/2016.

Stephanie West-Puckett presents a case study of the use of “digital badges” to create a local, contextualized, and participatory assessment process that works toward social justice in the writing classroom.

She notes that digital badges are graphic versions of those earned by scouts or worn by members of military groups to signal “achievement, experience, or affiliation in particular communities” (130). Her project, begun in Fall 2014, grew out of Mozilla’s free Open Badging Initiative and the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), which awarded grants to four universities as well as to museums, libraries, and community partnerships to develop badging as a way of recognizing learning (131).

West-Puckett employed badges as a way of encouraging and assessing student engagement in the outcomes and habits of mind included in such documents as the Framework for Success in Postsecondary Writing, the Outcomes Statements for First-Year Composition produced by the Council of Writing Program Administrators, and her own institution’s outcomes statement (137). Her primary goal is to foster a “participatory” process that foregrounds the agency of teachers and students and recognizes the ways in which assessment can influence classroom practice. She argues that such participation in designing and interpreting assessments can address the degree to which assessment can drive bias and limit access and agency for specific groups of learners (129).

She reviews composition scholarship characterizing most assessments as “top-down” (127-28). In these practices, West-Puckett argues, instruments such as rubrics become “fetishized,” with the result that they are forced upon contexts to which they are not relevant, thus constraining the kinds of assignments and outcomes teachers can promote (134). Moreover, assessments often fail to encourage students to explore a range of literacies and do not acknowledge learners’ achievements within those literacies (130). More valid, for West-Puckett, are “hyperlocal” assessments designed to help teachers understand how students are responding to specific learning opportunities (134). Allowing students to join in designing and implementing assessments makes the learning goals visible and shared while limiting the power of assessment tools to marginalize particular literacies and populations (128).

West-Puckett contends that the multimodal focus in writing instruction exacerbates the need for new modes of assessment. She argues that digital badges partake of “the primacy of visual modes of communication,” especially for populations “whose bodies were not invited into the inner sanctum of a numerical and linguistic academy” (132). Her use of badges contributes to a form of assessment that is designed not to deride writing that does not meet the “ideal text” of an authority but rather to enlist students’ interests and values in “a dialogic engagement about what matters in writing” (133).

West-Puckett argues for pairing digital badging with “critical validity inquiry,” in which the impact of an assessment process is examined through a range of theoretical frames, such as feminism, Marxism, or queer or disability theory (134). This inquiry reveals assessment’s role in sustaining or potentially disrupting entrenched views of what constitutes acceptable writing by examining how such views confer power on particular practices (134-35).

In West-Puckett’s classroom in a “mid-size, rural university in the south” with a high percentage of students of color and first-generation college students (135), small groups of students chose outcomes from the various outcomes statements, developed “visual symbols” for the badges, created a description of the components and value of the outcomes for writing, and detailed the “evidence” that applicants could present from a range of literacy practices to earn the badges (137). West-Puckett hoped that this process would decrease the “disconnect” between her understanding of the outcomes and that of students (136), as well as engage students in a process that takes into account the “lived consequences of assessment” (141): its disparate impact on specific groups.

The case study examines several examples of badges, such as one using a compass to represent “rhetorical knowledge” (138). The group generated multimodal presentations, and applicants could present evidence in a range of forms, including work done outside of the classroom (138-39). The students in the group decided whether or not to award the badge.

West-Puckett details the degree to which the process invited “lively discussion” by examining the “Editing MVP” badge (139). Students defined editing as proofreading and correcting one’s own paper but visually depicted two people working together. The group refused the badge to a student of color because of grammatical errors but awarded it to another student who argued for the value of using non-standard dialogue to show people “‘speaking real’ to each other” (qtd. in West-Puckett 140). West-Puckett recounts the classroom discussion of whether editing could be a collaborative effort and when and in what contexts correctness matters (140).

In Fall 2015, West-Puckett implemented “Digital Badging 2.0” in response to her concerns about “the limited construct of good writing some students clung to” as well as how to develop “badging economies that asserted [her] own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers” (142). She created two kinds of badging activities, one carried out by students as before, the other for her own assessment purposes. Students had to earn all the student-generated badges in order to pass, and a given number of West-Puckett’s “Project Badges” to earn particular grades (143). She states that she privileges “engagement as opposed to competency or mastery” (143). She maintains that this dual process, in which her decision-making process is shared with the students who are simultaneously grappling with the concepts, invites dialogue while allowing her to consider a wide range of rhetorical contexts and literacy practices over time (144).
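As a rough illustration of how such a dual badging economy might map badge counts onto grades, consider the sketch below; the thresholds and badge counts are invented for illustration and are not West-Puckett’s actual requirements.

```python
# Hypothetical sketch of a dual badging economy: passing requires every
# student-generated badge, and the final grade then depends on how many of the
# instructor's "Project Badges" a student has earned. Thresholds are invented.
def course_grade(student_badges_earned: int, student_badges_total: int,
                 project_badges_earned: int) -> str:
    if student_badges_earned < student_badges_total:
        return "F"  # all student-generated badges are required to pass
    if project_badges_earned >= 8:
        return "A"
    if project_badges_earned >= 6:
        return "B"
    if project_badges_earned >= 4:
        return "C"
    return "D"

print(course_grade(student_badges_earned=5, student_badges_total=5,
                   project_badges_earned=7))  # -> "B"
```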

West-Puckett reports that although she found evidence that the badging component did provide students an opportunity to take more control of their learning, as a whole the classes did not “enjoy” badging (145). They expressed concern about the extra work, the lack of traditional grades, and the responsibility involved in meeting the project’s demands (145). However, in disaggregated responses, students of color and lower-income students viewed the badge component favorably (145). According to West-Puckett, other scholars have similarly found that students in these groups value “alternative assessment models” (146).

West-Puckett lays out seven principles that she believes should guide participatory assessment, foregrounding the importance of making the processes “open and accessible to learners” in ways that “allow learners to accept or refuse particular identities that are constructed through the assessment” (147). In addition, “[a]ssessment artifacts,” in this case badges, should be “portable” so that students can use them beyond the classroom to demonstrate learning (148). She presents badges as an assessment tool that can embody these principles.



Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of the peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric,” administered through a software package, My Reviewers, that allowed for discursive comments but, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—“language conventions”—and Critical Thinking—“global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).
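For readers curious about what this kind of comparison looks like computationally, the following sketch correlates paired instructor and peer rubric ratings and tests whether peers rate more generously; the simulated scores are purely illustrative and are not the study’s data.

```python
# Illustrative sketch (simulated data, not the study's): correlate paired
# instructor and peer rubric ratings and test whether peer scores run higher.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(0)
instructor = rng.integers(1, 6, size=200).astype(float)             # 5-point rubric ratings
peer = np.clip(instructor + rng.normal(0.4, 1.2, size=200), 1, 5)   # peers rate a bit higher, with noise

r, r_p = pearsonr(instructor, peer)      # "low to modest" positive correlation
t, t_p = ttest_rel(peer, instructor)     # paired test of the peer-instructor difference

print(f"Pearson r = {r:.2f} (p = {r_p:.3g})")
print(f"Mean difference (peer - instructor) = {np.mean(peer - instructor):.2f}, t = {t:.2f} (p = {t_p:.3g})")
```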

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer-review so that it benefits the widest range of students (73-74).



Boyle, Casey. Rhetoric and/as Posthuman Practice. CE, July 2016. Posted 08/06/2016.

Boyle, Casey. “Writing and Rhetoric and/as Posthuman Practice.” College English 78.6 (2016): 532-54. Print.

Casey Boyle examines the Framework for Success in Postsecondary Writing, issued by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project, in light of its recommendation that writing instruction encourage the development of “habits of mind” that result in enhanced learning.

Boyle focuses especially on the Framework‘s attention to “metacognition,” which he finds to be largely related to “reflection” (533). In Boyle’s view, when writing studies locates reflection at the center of writing pedagogy, as he argues it does, the field endorses a set of “bad habits” that he relates to a humanist mindset (533). Boyle proposes instead a view of writing and writing pedagogy that is “ecological” and “posthuman” (538). Taking up Kristine Johnson’s claim that the Framework opens the door to a revitalization of “ancient rhetorical training,” Boyle challenges the equation of such training with a central mission of social and political critique (534).

Boyle recounts a history of writing pedagogy beginning with “current-traditional rhetoric” as described by Sharon Crowley and others as the repetitive practice of form (535). Rejection of this pedagogy resulted in a shift toward rhetorical and writing education as a means of engaging students with their social and political surroundings. Boyle terms this focus “current-critical rhetoric” (536). Its primary aim, he argues, is to increase an individual’s agency in that person’s dealings with his or her cultural milieu, enhancing the individual’s role as a citizen in a democratic polity (536).

Boyle critiques current-critical rhetoric, both in its approach to the self and in its insistence on the importance of reflection as a route to critical awareness, for its determination to value the individual’s agency over the object, which is viewed as separate from the acting self (547). Boyle cites Peter Sloterdijk’s view that the humanist sense of a writing self manifests itself in the “epistle or the letter to a friend” that demonstrates the existence of a coherent identity represented by the text (537). Boyle further locates a humanist approach in the “reflective letter assignments” that ask students to demonstrate their individual agency in choosing among many options as they engage in rhetorical situations (537).

To develop the concept of the “ecological orientation” (538) that is consistent with a posthumanist mindset, Boyle explores a range of iterations of posthumanism, which he stresses is not to be understood as “after the human” (539). Rather, quoting N. Katherine Hayles, Boyle characterizes posthumanism as “the end of a certain conception of the human” (qtd. in Boyle 539). Central to posthumanism is the idea of human practices as one component of a “mangled assemblage” of interactions among both human and nonhuman entities (541) in which the separation of subject and object becomes impossible. In this view, “rhetorical training” would become “an orchestration of ecological relations” (539), in which practices within a complex of technologies and environments, some of them not consciously summoned, would emerge from the relations and shape future practices and relations.

Boyle characterizes this understanding of practice as a relation of “betweenness among what was previously considered the human and the nonhuman” (540; emphasis in original). He applies Andrew Pickering’s metaphor of practice as a “reciprocal tuning of people and things” (541). In such an orientation, “[t]heory is a practice” that “is continuous with and not separate from the mediation of material ecologies” (542). Practice becomes an “ongoing tuning” (542) that functions as a “way of becoming” (Robert Yagelski, qtd. in Boyle 538; emphasis in original).

In Boyle’s view, the Framework points toward this ecological orientation in stressing the habit of “openness” to “new ways of being” (qtd. in Boyle 541). In addition, the Framework envisions students “writing in multiple environments” (543; emphasis in Boyle). Seen in a posthuman light, such multiple exposures redirect writers from the development of critical awareness to, in Pickering’s formulation, knowledge understood as a “sensitivity” to the interactions of ecological components in which actors both human and nonhuman are reciprocally generative of new forms and understandings (542). Quoting Isabelle Stengers, Boyle argues that “an ecology of practices does not have any ambition to describe things ‘as they are’ . . . but as they may become” (qtd. in Boyle 541).

In Boyle’s formulation, agency becomes “capacity,” which is developed through repeated practice that “accumulates prior experience” to construct a “database of experience” establishing the habits we draw on to engage productively with future environments (545). Such an accumulation comes to encompass, in the words of Collin Brooke, “all of the ‘available means’” (qtd. in Boyle 549), not all of them visible to conscious reflection (544), through which we can affect and be affected by ongoing relations in rhetorical situations.

Boyle embodies such practice in the figure of the archivist “whose chief task is to generate an abundance of relations” rather than that of the letter writer (550), thus expanding options for being in the world. Boyle emphasizes that the use of practice in this way is “serial” in that each reiteration is both “continuous” and “distinct,” with the components of the series “a part of, but also apart from, any linear logic that might be imposed” (547): “Practice is the repetitive production of difference” (547). Practice also becomes an ethics that does not seek to impose moral strictures (548) but rather to enlarge and enable “perception” and “sensitivities” (546) that coalesce, in the words of Rosi Braidotti, in a “pragmatic task of self-transformation through humble experimentation” (qtd. in Boyle 539).

Boyle connects these endeavors to rhetoric’s historical allegiance to repetition through sharing “common notions” (Giles Deleuze, qtd. in Boyle 550). Persuasion, he writes, “occurs . . . not as much through rational appeals to claims but through an exercise of material and discursive forms” (550), that is, through relations enlarged by habits of practice.

Related to this departure from conscious rational analysis is Boyle’s proposed posthuman recuperation of “metacognition,” which he states has generally been perceived to involve analysis from a “distance or remove from an object to which one looks” (551). In Boyle’s view, metacognition can be understood more productively through a secondary meaning that connotes “after” and “among” (551). Similarly, rhetoric operates not in the particular perception arising from a situated moment but “in between” the individual moment and the sensitivities acquired from experience in a broader context (550; emphasis in original):

[R]hetoric, by attending more closely to practice and its nonconscious and nonreflective activity, reframes itself by considering its operations as exercises within a more expansive body of relations than can be reduced to any individual human. (552)

Such a sensibility, for Boyle, should refigure writing instruction, transforming it into “a practice that enacts a self” (537) in an ecological relation to that self’s world.




Obermark et al. New TA Development Model. WPA, Fall 2015. Posted 02/08/2016.

Obermark, Lauren, Elizabeth Brewer, and Kay Halasek. “Moving from the One and Done to a Culture of Collaboration: Revising Professional Development for TAs.” Journal of the Council of Writing Program Administrators 39.1 (2015): 32-53. Print.

Lauren Obermark, Elizabeth Brewer, and Kay Halasek detail a professional development model for graduate teaching assistants (TAs) that was established at their institution to better meet the needs of both beginning and continuing TAs. Their model responded to the call from E. Shelley Reid, Heidi Estrem, and Marcia Belcheir to “[g]o gather data—not just impressions—from your own TAs” in order to understand and foreground local conditions (qtd. in Obermark et al. 33).

To examine and revise their professional development process beginning in 2011 and continuing through 2013, Obermark et al. conducted a survey of current TAs, held focus groups, and surveyed “alumni” TAs to determine TAs’ needs and their reactions to the support provided by the program (35-36).

An exigency for Obermark et al. was the tendency they found in the literature to concentrate TA training on the first semester of teaching. They cite Beth Brunk-Chavez to note that this tendency gives short shrift to the continuing concerns and professional growth of TAs as they advance from their early experiences in first-year writing to more complex teaching assignments (33). As a result of their research, Obermark et al. advocate for professional development that is “collaborative,” “ongoing,” and “distributed across departmental and institutional locations” (34).

The TA program in place at the authors’ institution prior to the assessment included a week-long orientation, a semester’s teaching practicum, a WPA class observation, and a syllabus built around a required textbook (34). After their first year, TAs were able to move on to other classes, particularly the advanced writing class, which fulfills a general education requirement across the university and is expected to provide a more challenging writing experience, including a “scaffolded research project” (35). Obermark et al. found that while TAs with broader teaching backgrounds were often comfortable designing their own syllabi to meet more complex pedagogical requirements, many TAs who had moved from the well-supported first-year course to the second wished for more guidance than they had received (35).

Consulting further scholarship by Estrem and Reid led Obermark et al. to act on “a common error” in professional development: failing to conduct a “needs assessment” by directly asking questions designed to determine, in the words of Kathleen Blake Yancey, “the characteristics of the TAs for whom the program is designed” (qtd. in Obermark et al. 36-37). The use of interview methodology through focus groups not only instilled a collaborative ethos but also permitted the authors to plan “developmentally appropriate PD” and provided TAs with what the authors see as a rare opportunity to reflect on their experiences as teachers. Obermark et al. stress that this fresh focus on what Cynthia Selfe and Gail Hawisher call a “participatory model of research” (37) allowed the researchers to demonstrate that they perceived the TAs as professional colleagues, leading the TAs themselves “to identify more readily as professionals” (37).

TAs’ sense of themselves as professionals was further strengthened by the provision of “ongoing” support to move beyond what Obermark et al. call “the one and done” model (39). Through the university teaching center, they encountered Jody Nyquist and Jo Sprague’s theory of three stages of TA development: “senior learners” who “still identify strongly with students”; “colleagues in training” who have begun to recognize themselves as teachers; and “junior colleagues” who have assimilated their professional identities to the point that they “may lack only the formal credentials” (qtd. in Obermark et al. 39). Obermark et al. note that their surveys revealed, as Nyquist and Sprague predicted, that their population comprised TAs at all three levels as they moved through these stages at different rates (39-40).

The researchers learned that even experienced TAs still often had what might have been considered basic questions about the goals of the more advanced course and how to integrate the writing process into the course’s general education outcomes (40). The research revealed that as TAs moved past what Nyquist and Sprague denoted the “survival” mode that tends to characterize a first year of teaching, they began to recognize the value of composition theory and became more invested in applying theory to their teaching (39). That 75% of the alumni surveyed were teaching writing in their institutions regardless of their actual departmental positions reinforced the researchers’ certainty and the TAs’ awareness that composition theory and practice would be central to their ongoing academic careers (40).

Refinements included a more extensive schedule of optional workshops and a “peer-to-peer” program that responded to TA requests for more opportunities to observe and interact with each other. Participating TAs received guidance on effective observation processes and feedback; subsequent expansion of this program offered TAs opportunities to share approaches to designing assignments and grading as well (42).

The final component of the new professional-development model focused on expanding the process of TA support across both the English department and the wider university. Obermark et al. indicate that many of the concerns expressed by TAs addressed not just teaching writing with a composition-studies emphasis but also teaching more broadly in areas that “did not fall neatly under our domain as WPAs and specialists in rhetoric and composition” (43). For example, TAs asked for more guidance in working with students’ varied learning styles and, in particular, in meeting the requirement for “social diversity” expressed in the general-education outcomes for the more advanced course (44). Some alumni TAs reported wishing for more help teaching in other areas within English, such as in literature courses (45).

The authors designed programs featuring faculty and specialists in different pedagogical areas, such as diversity, as well as workshops and break-outs in which TAs could explore kinds of teaching that would apply across the many different environments in which they found themselves as professionals (45). Obermark et al. note especially the relationship they established with the university teaching center, a collaboration that allowed them to integrate expertise in composition with other philosophies of teaching and that provided “allies in both collecting data and administering workshops for which we needed additional expertise” (45). Two other specific benefits from this partnership were the enhanced “institutional memory” that resulted from inclusion of a wider range of faculty and staff and increased sustainability for the program as a larger university population became invested in the effort (45-46).

Obermark et al. provide their surveys and focus-group questions, urging other WPAs to engage TAs in their own development and to relate to them “as colleagues in the field rather than novices in need of training, inoculation, or the one and done approach” (47).



Anderson et al. Contributions of Writing to Learning. RTE, Nov. 2015. Posted 12/17/2015.

Anderson, Paul, Chris M. Anson, Robert M. Gonyea, and Charles Paine. “The Contributions of Writing to Learning and Development: Results from a Large-Scale, Multi-institutional Study.” Research in the Teaching of English 50.2 (2015): 199-235. Print.

Note: The study referenced by this summary was reported in Inside Higher Ed on Dec. 4, 2015. My summary may add some specific details to the earlier article and may clarify some issues raised in the comments on that piece. I invite the authors and others to correct and elaborate on my report.

Paul Anderson, Chris M. Anson, Robert M. Gonyea, and Charles Paine discuss a large-scale study designed to reveal whether writing instruction in college enhances student learning. They note widespread belief both among writing professionals and other stakeholders that including writing in curricula leads to more extensive and deeper learning (200), but contend that the evidence for this improvement is not consistent (201-02).

In their literature review, they report on three large-scale studies that show increased student learning in contexts rich in writing instruction. These studies concluded that the amount of writing in the curriculum improved learning outcomes (201). However, these studies contrast with the varied results from many “small-scale, quasi-experimental studies that examine the impact of specific writing interventions” (200).

Anderson et al. examine attempts to perform meta-analyses across such smaller studies to distill evidence regarding the effects of writing instruction (202). They postulate that these smaller studies often explore such varied practices in so many diverse environments that it is hard to find “comparable studies” from which to draw conclusions; the specificity of the interventions and the student populations to which they are applied make generalization difficult (203).

The researchers designed their investigation to address the disparity among these studies by searching for positive associations between clearly designated best practices in writing instruction and validated measures of student learning. In addition, they wanted to know whether the effects of writing instruction that used these best practices differed from the effects of simply assigning more writing (210). The interventions and practices they tested were developed by the Council of Writing Program Administrators (CWPA), while the learning measures were those used in the National Survey of Student Engagement (NSSE). This collaboration resulted from a feature of the NSSE in which institutions may form consortia to “append questions of specific interest to the group” (206).

Anderson et al. note that an important limitation of the NSSE is its reliance on self-report data, but they contend that “[t]he validity and reliability of the instrument have been extensively tested” (205). Although the institutions sampled were self-selected and women, large institutions, research institutions, and public schools were over-represented, the authors believe that the overall diversity and breadth of the population sampled by the NSSE/CWPA collaboration, encompassing more than 70,000 first-year and senior students, permits generalization that has not been possible with more narrowly targeted studies (204).

The NSSE queries students on how often they have participated in pedagogic activities that can be linked to enhanced learning. These include a wide range of practices, such as service-learning, interactive learning, and “institutionally challenging work” such as extensive reading and writing. In addition, the survey inquires about campus features such as support services and relationships with faculty, as well as students’ perceptions of the degree to which their college experience led to enhanced personal development. The survey also captures demographic information (205-06).

Chosen as dependent variables for the joint CWPA/NSSE study were two NSSE scales:

  • Deep Approaches to Learning, which encompassed three subscales, Higher-Order Learning, Integrative Learning, and Reflective Learning. This scale focused on activities related to analysis, synthesis, evaluation, combination of diverse sources and perspectives, and awareness of one’s own understanding of information (211).
  • Perceived Gains in Learning and Development, which involved subscales of Practical Competence such as enhanced job skills, including the ability to work with others and address “complex real-world problems”; Personal and Social Development, which inquired about students’ growth as independent learners with “a personal code of values and ethics” able to “contribut[e] to the community”; and General Education Learning, which includes the ability to “write and speak clearly and effectively, and to think critically and analytically” (211).

The NSSE also asked students for a quantitative estimate of how much writing they actually did in their coursework (210). These data allowed the researchers to separate the effects of simply assigning more writing from those of employing different kinds of writing instruction.

To test for correlations between pedagogical choices in writing instruction and practices related to enhanced learning as measured by the NSSE scales, the research team developed a “consensus model for effective practices in writing” (206). Eighty CWPA members generated questions that were distilled to 27 and divided into “three categories based on related constructs” (206). Twenty-two of these ultimately became part of a module appended to the NSSE that, like the NSSE “Deep Approaches to Learning” scale, asked students how often their coursework had included the specific activities and behaviors in the consensus model. The “three hypothesized constructs for effective writing” (206) were

  • Interactive Writing Processes, such as discussing ideas and drafts with others, including friends and faculty;
  • Meaning-Making Writing Tasks, such as using evidence, applying concepts across domains, or evaluating information and processes; and
  • Clear Writing Expectations, which refers to teacher practices in making clear to students what kind of learning an activity promotes and how student responses will be assessed. (206-07)

They note that no direct measures of student learning are included in the NSSE, nor are such measures included in their study (204). Rather, in both the writing module and the NSSE scale addressing Deep Approaches to Learning, students are asked to report on kinds of assignments, instructor behaviors and practices, and features of their interaction with their institutions, such as whether they used on-campus support services (205-06). The scale on Perceived Gains in Learning and Development asks students to self-assess (211-12).

Despite the lack of specific measures of learning, Anderson et al. argue that the curricular content included in the Deep Approaches to Learning scale does accord with content that has been shown to result in enhanced student learning (211, 231). The researchers argue that comparisons between the NSSE scales and the three writing constructs allow them to detect an association between the effective writing practices and the attitudes toward learning measured by the NSSE.

Anderson et al. provide detailed accounts of their statistical methods. In addition to analysis for goodness-of-fit, they performed “blocked hierarchical regressions” to determine how much of the variance in responses was explained by the kind of writing instruction reported versus other factors, such as demographic differences, participation in various “other engagement variables” such as service-learning and internships, and the actual amount of writing assigned (212). Separate regressions were performed on first-year students and on seniors (221).
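A rough sketch of a blocked hierarchical regression of the kind the authors describe follows; the variable names, simulated data, and block ordering are assumptions made for illustration and do not reproduce the study’s actual model.

```python
# Illustrative sketch of a blocked hierarchical regression: predictor blocks
# (demographics, other engagement, amount of writing, then the three
# writing-instruction constructs) are entered in stages, and the change in
# R-squared shows how much variance each block adds. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),            # block 1: demographics
    "service_learning": rng.integers(0, 2, n),  # block 2: other engagement variables
    "pages_written": rng.normal(40, 15, n),     # block 3: amount of writing assigned
    "interactive": rng.normal(0, 1, n),         # block 4: writing-instruction constructs
    "meaning_making": rng.normal(0, 1, n),
    "clear_expectations": rng.normal(0, 1, n),
})
# Simulated outcome (e.g., a Deep Approaches to Learning score) built so the
# writing constructs explain variance beyond the earlier blocks.
df["deep_learning"] = (50 + 2 * df["female"] + 3 * df["service_learning"]
                       + 0.05 * df["pages_written"]
                       + 4 * df["interactive"] + 3 * df["meaning_making"]
                       + 2 * df["clear_expectations"] + rng.normal(0, 8, n))

blocks = [
    "deep_learning ~ female",
    "deep_learning ~ female + service_learning",
    "deep_learning ~ female + service_learning + pages_written",
    "deep_learning ~ female + service_learning + pages_written"
    " + interactive + meaning_making + clear_expectations",
]

prev_r2 = 0.0
for i, formula in enumerate(blocks, start=1):
    r2 = smf.ols(formula, data=df).fit().rsquared
    print(f"Block {i}: R^2 = {r2:.3f} (change = {r2 - prev_r2:.3f})")
    prev_r2 = r2
```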

Results “suggest[ed] that writing assignments and instructional practices represented by each of our three writing scales were associated with increased participation in Deep Approaches to Learning, although some of that relationship was shared by other forms of engagement” (222). Similarly, the results indicate that “effective writing instruction is associated with more favorable perceptions of learning and development, although other forms of engagement share some of that relationship” (224). In both cases, the amount of writing assigned had “no additional influence” on the variables (222, 223-24).

The researchers provide details of the specific associations among the three writing constructs and the components of the two NSSE scales. Overall, they contend, their data strongly suggest that the three constructs for effective writing instruction can serve “as heuristics that instructors can use when designing writing assignments” (230), both in writing courses and courses in other disciplines. They urge faculty to describe and research other practices that may have similar effects, and they advocate additional forms of research helpful in “refuting, qualifying, supporting, or refining the constructs” (229). They note that, as a result of this study, institutions can now elect to include the module “Experiences with Writing,” which is based on the three constructs, when students take the NSSE (231).