College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Wilkinson, Caroline. Collaboration in Dual-Credit Programs. WPA, Spring 2019. Posted 07/14/2019.

Wilkinson, Caroline. “From Dialogue to Collaboration in Dual-Credit Programs.” Journal of the Council of Writing Program Administrators 42.2 (2019): 80-99. Print.

Caroline Wilkinson addresses tensions that arise when universities implement dual-credit courses (credit-bearing college courses taught in a high-school environment). She draws on the experiences of two high-school teachers involved in a dual-credit program at the University of Louisville, a large Midwestern/Southern institution (83).

Wilkinson cites statistics showing that nationally, 1.4 million high-school students take dual-credit courses; 77% of these courses are taught at the high school, and 45% of those taught at the high school are taught by high-school teachers (80). She attributes “a real anxiety” to “[m]any composition educators” with regard to staffing dual-credit courses with secondary teachers (80). Having taught dual-credit courses herself, Wilkinson had “observed the very real differences in the contexts and cultures” that separate high school and college (83). Research indicates that the quality of dual-credit programs varies and that the benefits to students depend on various factors; “nonwhite” and female students seem to gain from the experience (81).

Acknowledging that composition scholarship has begun to consider the roles of the high-school teachers recruited to teach these courses, Wilkinson writes that scholarship specifically dealing with these teachers’ experiences is “limited” (81). Among her goals is to bring teachers’ voices into the discussion.

Students accepted into dual-credit courses at the University of Louisville met a number of criteria, including a 3.0 GPA and minimum entrance scores on standardized tests; they had to be nominated by their English teacher and approved by a counselor (83). “Most” teachers in the program had a Master’s in English or 18 hours of graduate English credit. “Emma” and “Daphne” were the only teachers at their high school to meet these criteria (84).

Instructors were also required to take the university’s “Teaching College Composition” course and to attend the summer orientation. In addition, they taught a standard syllabus and used pedagogical materials, including major assignments, provided by the university (83).

An important question for Wilkinson is “Can dual-credit courses be equivalent without being identical?” (88). She notes scholarship addressing the contextual differences between high school and college. Dual-credit students attend a year-long course with peers they already know rather than a semester-long course requiring them to build community with new acquaintances through the course itself (87-88). Daily class meetings also allow students more contact with instructors (84).

Wilkinson notes ways in which Emma and Daphne’s need to function within the full-time environment of the high-school community contributed to these differences. Differences in academic-year start dates meant that the two high-school teachers could not attend the full summer orientation (83). Similarly, the longer academic year meant that the graduate teaching assistants who attended the practicum course with Daphne and Emma completed the curriculum in less time than the high-school teachers did, a difference that made it hard for Daphne and Emma to make the best use of information covered in the fall semester but applied later in the year (85). The high-school teachers lacked the contact with other instructors teaching the same material and could not fully avail themselves of office hours and other support from the university writing program administrator (86). These teachers found that their workload made it harder for them to give the kinds of individualized responses they felt the college work called for (87). For these reasons, Wilkinson concludes that the courses were not “identical” (88).

However, she argues that they were “equivalent” (88, 94). The high-school course followed the same syllabus and used the same materials as the university-based version. The teachers received the same training as on-campus graduate assistants and “had a supportive WPA” (88). Both teachers and students recognized the unique features the college course offered, such as many useful materials and a more interactive environment (84, 86). Moreover, the high-school students had access to the university library and writing center and met new requirements, such as the use of outside sources, for their assignments (87).

Wilkinson expresses concern that “equivalence” in these respects does not align with scholarship that urges universities and high schools to see dual-credit programs as a “partnership” (88). Instead, in Wilkinson’s view, the relationship is “unidirectional,” with the university setting the contexts and terms (89).

Thus, despite administrative support, the two teachers felt “separate from” and “different” from the teaching community embodied by the Teaching College Composition course (90). Operating on a different time schedule, which meant separation into a distinct “mentoring group” (90), was one factor in this sense of isolation; another important factor, for Wilkinson, is that the course addressed issues faced by graduate assistants as first-time teachers, while Daphne and Emma had many years of teaching experience behind them and had very different needs (89). Wilkinson calls for a more explicit “bilateral” partnership in which the expertise of the high-school teachers is recognized and drawn on in the design and implementation of a dual-credit course (91).

Wilkinson considers taking the Teaching College Composition course “formal professionalization” into composition studies for the high-school teachers (92). In her view, this professionalization process creates problems both for the teachers and for composition as a field. Because of their inability to develop community within the university program and their earlier professionalization as high-school English teachers focusing on literature rather than writing, the teachers did not see themselves as true college instructors (91). Wilkinson raises the concern that positioning high-school teachers as competent to teach college writing may mean that “the long-fought for professionalization of the field is at risk” (93). First-year enrollments that form the staple of many writing programs may also suffer, resulting in fewer composition jobs. Finally, composition scholarship may cease to address first-year writing if it is delegated to the schools (93-94).

Wilkinson addresses remedies for WPAs dealing with dual-credit pressures. Noting that programs vary in the amount of resources they can devote to developing a successful dual-credit partnership (95), she urges that universities designate specific faculty as point-people for such efforts (96). She writes that mentorships can be more accommodating to the teachers’ schedules, but must be paired with coursework that introduces composition theory (93). Mentorships between new dual-credit teachers and more experienced ones can provide a stronger sense of community (96). Importantly, in her view, the teachers themselves can be included more fully in the development and implementation of these programs. Ideally, “dual credit programs provide an access point where high school and college instructors can work to collaborate on writing pedagogy and professionalization” (97).


Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho to use the mandate to end remediation as it is usually defined and to replace it with a more effective placement model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements in ways that enhance student success. First, they can move to create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
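The article describes The Write Class only at this high level and does not publish its matching rules. As a purely illustrative sketch (the function name, thresholds, and rules below are invented, not drawn from Estrem et al.), a rule-based matcher combining the phases they describe might look like:

```python
# Hypothetical placement sketch: the rules, names, and thresholds here are
# invented for illustration, not The Write Class's actual algorithm.

def place_student(reflection_score, weekly_commitment_hours, bypass_eligible):
    """Return (primary, secondary) course recommendations.

    reflection_score: 0-10 self-rating from the reading/writing reflection phase.
    weekly_commitment_hours: expected hours of work and family obligations.
    bypass_eligible: whether test scores and other data permit bypassing
        into the research-based course.
    """
    if bypass_eligible and reflection_score >= 8:
        # Confident writers with qualifying data may start in the
        # second-semester research-based course.
        return ("Course C (research-based)", "Course A (standard first-year)")
    if reflection_score <= 4 or weekly_commitment_hours >= 30:
        # Less confident or heavily committed students get the version
        # with the extra small-group workshop hour.
        return ("Course B (with workshop hour)", "Course A (standard first-year)")
    return ("Course A (standard first-year)", "Course B (with workshop hour)")
```

In such a design, a student would see only the generic labels, in keeping with the authors’ point that names like “Course A” discourage students from chasing or dodging particular courses.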

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).


Sills, Ellery. Creating “Outcomes 3.0.” CCC, Sept. 2018. Posted 10/24/2018.

Sills, Ellery. “Making Composing Policy Audible: A Genealogy of the WPA Outcomes Statement 3.0.” College Composition and Communication 70.1 (2018): 57-81. Print.

Ellery Sills provides a “genealogy” of the deliberations involved in the development of “Outcomes 3.0,” the third revision of the Council of Writing Program Administrators’ Outcomes Statement for First-Year Composition (58). His starting point is “Revising FYC Outcomes for a Multimodal, Digitally Composed World,” a 2014 article by six of the ten composition faculty who served on the task force to develop Outcomes (OS) 3.0 (57).

Sills considers the 2014 article a “perfectly respectable history” of the document (58), but argues that such histories do not capture the “multivocality” of any policymaking process (59). He draws on Chris Gallagher to contend that official documents like the three Outcomes Statements present a finished product that erases debates and disagreements that go into policy recommendations (59). Sills cites Michel Foucault’s view that, in contrast, a genealogy replaces “the monotonous finality” (qtd. in Sills 59) of a history by “excavat[ing] the ambiguities” that characterized the deliberative process (59).

For Sills, Outcomes 3.0 shares with previous versions of the Outcomes Statement the risk that it will be seen as “hegemonic” and that its status as an official document will constrain teachers and programs from using it to experiment and innovate (75-76). He argues that sharing the various contentions that arose as the document was developed can enhance its ability to function as, in the words of Susan Leigh Star, a document of “cooperation without consensus” (qtd. in Sills 73) that does not preclude interpretations that may not align with a perceived status quo (76). Rather, in Sills’s view, revealing the different voices involved in its production permits Outcomes 3.0 to be understood as a “boundary object,” that is, an object that is

strictly defined within a particular community of practice, but loosely defined across different communities of practice. . . . [and that] allows certain terms and concepts . . . to encompass many different things. (74)

He believes that “[k]eeping policy deliberations audible” (76) will encourage instructors and programs to interpret the document’s positions flexibly as they come to see how many different approaches were brought to bear in generating the final text.

Sills invited all ten task force members to participate in “discourse-based” interviews. Five agreed: Dylan Dryer, Susanmarie Harrington, Bump Halbritter, Beth Brunk-Chavez, and Kathleen Blake Yancey (60-61). Discussion focused on deliberations around the terms “composing, technology, and genre” (61; emphasis original).

Sills’s discussion of the deliberations around “composing” focuses on the shift from “writing” as a key term to a less restrictive term that could encompass many different ways in which people communicate today (61). Sills indicates that the original Outcomes Statement (1.0) of 2000 made digital practices a “residual category” in comparison to traditional print-based works, while the 3.0 task force worked toward a document that endorsed both print and multimodal practices without privileging either (63).

Ideally, in the interviewees’ views, curricula in keeping with Outcomes 3.0 recognizes composing’s “complexity,” regardless of the technologies involved (65). At the same time, in Sills’s analysis, the multiplicity of practices incorporated under composing found common ground in the view, in Dryer’s words, that “we teach writing, we’re bunch of writers” (qtd. in Sills 65).

Sills states that the “ambiguity” of terms like “composing” served not only to open the door to many forms of communicative practice but also to respond to the “kairotic” demands of a document like Outcomes 3.0. Interviewees worried that naming specific composing practices would result in guidelines that quickly fell out of date as composing options evolved (64).

According to Sills, interviews about the deliberations over genre revealed more varied attitudes than those about composing (66). In general, the responses Sills records suggest a movement away from seeing genres as fixed “static form[s]” (67) calling for a particular format toward recognizing genres as fluid, flexible, and responsive to rhetorical situations. Sills quotes Dryer’s claim that the new document depicts “students and readers and writers” as “much more agentive”; “genres change and . . . readers and writers participate in that change” (qtd. in Sills 67). Halbritter emphasizes a shift from “knowledge about” forms to a process of “experiential learning” as central to the new statement’s approach (68). For Harrington, the presentation of genre in the new document reflects attention to “habits of mind” such as rhetorical awareness and “taking responsibility for making choices” (qtd. in Sills 69).

Brunk-Chavez’s interview addresses the degree to which, in the earlier statements, technology was handled as a distinct element when genre was still equated primarily with textual forms. In the new document, whatever technology is being used is seen as integral to the genre being produced (69). Moreover, she notes that OS 3.0’s handling of genre opens it to types of writing done across disciplines (70).

She joins Yancey, however, in noting the need for the document to reflect “the consensus of the field” (72). While there was some question as to whether genre as a literary or rhetorical term should even be included in the original OS, Yancey argues that the term’s “time has come” (71). Yet the interviews capture a sense that not every practitioner in composition shares a common understanding of the term and that the document should still be applicable, for example, to instructors for whom “genre” still equates with modes (71).

In addressing this variation in the term’s function in practice, Sills notes Yancey’s desire for OS 3.0 to be a “bridging document” that does not “move too far ahead of where the discipline is,” linking scholarly exploration of genre with the many ways practitioners understand and use the term (72).

Sills considers challenges that the OS 3.0 must address if it is to serve the diverse and evolving needs of the field. Responding to concerns of scholars like Jeff Rice that the document imposes an ultimately conservative “ideology of generality” that amounts to a “rejection of the unusual” (qtd. in Sills 75), Sills acknowledges that the authority of the statement may prevent “subordinate communities of practice” like contingent faculty from “messing around with” its recommendations. But he contends that the task force’s determination to produce flexible guidelines and to foster ongoing revision can encourage “healthy resistance” to possible hegemony (76).

He further recommends specific efforts to expand participation, such as creating a Special Interest Group or a “standing institutional body” like an Outcomes Collective with rotating membership from which future task forces can be recruited on a regular timetable. Such ongoing input, he contends, can both invite diversity as teachers join the conversation more widely and assure the kairotic validity of future statements in the changing field (77-78).


Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”
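The coding scheme lends itself to simple tallies of the kind the research team reports. In this sketch, the category labels come from Bowden’s scheme, but the sample data and variable names are invented for illustration:

```python
# Tally comments coded with Bowden's categories: placement ("in-draft",
# "marginal", "end") and depth ("surface" vs. "substance").
# The sample records below are invented for illustration.
from collections import Counter

coded_comments = [
    {"placement": "in-draft", "depth": "surface"},
    {"placement": "marginal", "depth": "substance"},
    {"placement": "marginal", "depth": "surface"},
    {"placement": "end", "depth": "substance"},
]

# Counter totals each category across the coded records.
by_placement = Counter(c["placement"] for c in coded_comments)
by_depth = Counter(c["depth"] for c in coded_comments)
```

A tally like `by_depth` is what supports findings such as substance-level comments outnumbering surface-level ones.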

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined, as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation impacted students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed seeking feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of the writing center for drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.



Malek and Micciche. What Can Faculty Do about Dual-Credit? WPA, Spring 2017. Posted 08/03/2017.

Malek, Joyce, and Laura R. Micciche. “A Model of Efficiency: Pre-College Credit and the State Apparatus.” Journal of the Council of Writing Program Administrators 40.2 (2017): 77-97. Print.

Joyce Malek and Laura R. Micciche discuss the prevalence and consequences of dual and concurrent enrollment initiatives in universities and colleges as well as the effects of Advanced Placement (AP) exemptions. They view these arrangements as symptoms of increased “managerial” control of higher education, resulting in an emphasis on efficiency and economics at the expense of learning (79).

As faculty at the University of Cincinnati, they recount the history of various dual-enrollment programs in Ohio. The state’s Postsecondary Enrollment Options program (PSEO), which originated in 1989, as of 2007 gave students as early as the ninth and tenth grades the opportunity to earn both high school and college credits (81). A 2008 program, Seniors to Sophomores (STS), initiated by then-Governor Strickland, allowed high-school seniors to “spend their senior year on a participating Ohio college or university campus,” taking “a full load” for college credit (81-82).

After a poor response to STS from students who were unable or unwilling to dispense with a senior year at their regular high school, this program was eventually included in “College Credit Plus” (CCP), in which students beginning in grade seven can earn as many as 30 college credits yearly through courses taught at their high schools by high-school teachers. At the authors’ institution, records of applying students “are assessed holistically and are reviewed against a newly developed state benchmark” that declares them, in the words of the standard, to be “remediation free in a subject” (qtd. in Malek and Micciche 82). The authors state that they were “unable to trace the history of these standards” (83); they speculate that the language arose because students enrolling in the program had proved unable to succeed at college work (82).

Malek and Micciche report that these initiatives often required commitment from writing-program faculty; for example, writing faculty at their university were instructed, along with faculty from history, Spanish, French, and math, to develop programs certifying high-school teachers to teach college coursework (83). Writing faculty were given two weeks to provide this service, with no additional funding and without the ability to design curriculum. The initiative proved to include as well a range of additional unfunded duties, such as class observations and assessment (83-84).

The authors note that funding for all such initiatives is not guaranteed, suggesting that the programs may not survive. In contrast, they note, “AP [Advanced Placement] credit is institutionalized and is here to stay” (84).

The authors see AP as a means of achieving the managerial goals of the “technobureaucrats” (84, 90) increasingly in charge of higher education. They contend that a major objective of such policy makers is the development of a system that delivers students to the university system as efficiently as possible and at the lowest cost to the consumer (78-79). The authors recognize the importance of reducing the cost of higher education, noting that in-state students earning exemption through as many as 36 AP credits can save $11,000 a year in tuition, while out-of-state students can save up to $26,334 (84). However, in their view, these savings, when applied to writing, come at the cost both of the opportunity to encounter fully the richness of writing as a means of communication and of the kind of practice that produces a confident, capable writer who can succeed in complex academic and professional environments (87).

Malek and Micciche present their experience with AP to illustrate their claim that higher education has been taken out of the hands of faculty and programs and handed over to technocrats (85), a trend that they describe as “an alarming statist creep” (85). In Ohio, communicating their intentions only to “staff not positioned to object,” such as advisors, the Board of Regents lowered the AP score deemed acceptable for exemption from a 4 to a 3 (78). This change, the authors write, was “not predicated . . . on any research whatsoever” (87). Its main purpose, in the authors’ view, was to channel students as quickly as possible into Ohio institutions and to reduce students’ actual investment in college to two years (79). Efforts to network in hopes of creating “a cross-institutional objection to the change” came to naught (78).

Malek and Micciche document AP’s growing incursion into university programs by pointing to its rapid growth (88). Contending that few faculty know what is involved in AP scores, the authors question the AP organization’s authority to decide how scores translate into “acceptable” coursework and note that to earn a score of 3, a student need answer correctly only “a little more than 50 percent” of the multiple-choice questions on the exams (86).

Malek and Micciche express concern that the low status of first-year composition, as well as its nature as a required course, makes it especially vulnerable to takeover by state and managerial forces (89-90). Such takeover results in the loss of faculty positions and illustrates the “limited rhetorical power” of writing professionals, who have not succeeded in finding a voice in policy decisions and find themselves in “a reactive stance” in which they ultimately enable the managerial agenda (88-89). They find it unlikely that proposals for enhancing the status of writing studies in general will speak to the economic goals of policy makers outside of the field (90).

Similarly, they contend that “refus[ing] to participate” in the development of dual-credit initiatives will not stem the tide of such programs (92). An alternative is to become deeply involved in making sure that training for teachers in AP or dual or concurrent enrollment programs is as rich and theoretically informed as possible (92).

As a more productive means of strengthening the rhetorical agency of writing faculty, Malek and Micciche suggest “coalition-building” across a wide range of stakeholders (90). They illustrate such coalition-building with other colleges by presenting their alliance with the university’s College of Allied Health Sciences (CAHS) to design curricula to help students in CAHS courses improve as writers in their field (90-91). In their view, enlisting other disciplines in this way reinforces the importance of writing and should be seen “as a good thing” (91).

Also, noting that businesses spend “over 3 billion dollars annually to address writing deficiencies” (91), Malek and Micciche advocate for connections with local businesses, suggesting that managerial policy makers will be responsive to arguments about students’ need for “job readiness” (92).

Finally, they suggest enlisting students in efforts to lobby for the importance of college writing. They cite a study asking students to compare their AP courses with subsequent experiences in a required first-year-composition course. Results showed that the AP courses were not a substitute for the college course (93). To build this coalition with students, the authors advocate asking students about their needs and, in response, possibly imagining a “refashioned idea of FYC,” even if doing so means that “we might have to give up some of our most cherished beliefs and values and further build on our strengths” (93).


Wooten et al. SETs in Writing Classes. WPA, Fall 2016. Posted 02/11/2016.

Wooten, Courtney Adams, Brian Ray, and Jacob Babb. “WPAs Reading SETs: Toward an Ethical and Effective Use of Teaching Evaluations.” Journal of the Council of Writing Program Administrators 40.1 (2016): 50-66. Print.

Courtney Adams Wooten, Brian Ray, and Jacob Babb report on a survey examining the use of Student Evaluations of Teaching (SETs) by writing program administrators (WPAs).

According to Wooten et al., although WPAs appear to be dissatisfied with the way SETs are generally used and have often attempted to modify the form and implementation of these tools for evaluating teaching, they have done so without the benefit of a robust professional conversation on the issue (50). Noting that much of the research they found on the topic came from areas outside of writing studies (63), the authors cite a single collection on using SETs in writing programs by Amy Dayton that recommends using SETs formatively and as one of several measures to assess teaching. Beyond this source, they cite “the absence of research on SETs in our discipline” as grounds for the more extensive study they conducted (51).

The authors generated a list of WPA contact information at more than 270 institutions, ranging from two-year colleges to private and parochial schools to flagship public universities, and solicited participation via listservs and emails to WPAs (51). Sixty-two institutions responded in summer 2014 for a response rate of 23%; 90% of the responding institutions were four-year institutions.

Despite this low response rate, the authors found the data informative (52). They note that the difficulty in recruiting faculty responses from two-year colleges may have resulted from problems in identifying responsible WPAs in programs where no specific individual directed a designated writing program (52).

Their survey, which they provide, asked demographic and logistical questions to establish current practice regarding SETs at the responding institutions as well as questions intended to elicit WPAs’ attitudes toward the ways SETs affected their programs (52). Open-ended questions allowed elaboration on Likert-scale queries (52).

An important recurring theme in the responses involved the kinds of authority WPAs could assert over the type and use of SETs at their schools. Responses indicated that the degree to which WPAs could access student responses and could use them to make hiring decisions varied greatly. Although 76% of the WPAs could read SETs, a similar number indicated that department chairs and other administrators also examined the student responses (53). For example, in one case, the director of a first-year-experience program took primary charge of the evaluations (53). The authors note that WPAs are held accountable for student outcomes but, in many cases, cannot make personnel decisions affecting these outcomes (54).

Wooten et al. report other tensions revolving around WPAs’ authority over tenured and tenure-track faculty; in these cases, surveyed WPAs often noted that they could influence neither curricula nor course assignments for such faculty (54). Many WPAs saw their role as “mentoring” rather than “hiring/firing.” The WPAs were obliged to respond to requests from external authorities to deal with poor SETs (54); the authors note a “tacit assumption . . . that the WPA is not capable of interpreting SET data, only carrying out the will of the university” (54). They argue that “struggles over departmental governance and authority” deprive WPAs of the “decision-making power” necessary to do the work required of them (55).

The survey “revealed widespread dissatisfaction” about the ways in which SETs were administered and used (56). Only 13% reported implementing a form specific to writing; more commonly, writing programs used “generic” forms that asked broad questions about the teacher’s apparent preparation, use of materials, and expertise (56). The authors contend that these “indirect” measures do not ask about practices specific to writing and may elicit negative comments from students who do not understand what kinds of activities writing professionals consider most beneficial (56).

Other issues of concern include the use of online evaluations, which provide data that can be easily analyzed but result in lower participation rates (57). Moreover, the authors note, WPAs often distrust numerical data without the context provided by narrative responses, to which they may or may not have access (58).

Respondents also noted confusion or uncertainty about how an institution determines what constitutes a “good” or “poor” score. Many institutions make this determination by comparing an individual teacher’s score to a departmental or university-wide average, with scores below the average signaling the need for intervention. The authors found evidence that even WPAs may fail to recognize that lower scores can be influenced not just by the grade the student expects but also by gender, ethnicity, and age, as well as by whether the course is required (58-59).

Wooten et al. distinguish between “teaching effectiveness,” a basic measure of competence, and “teaching excellence,” practices and outcomes that can serve as benchmarks for other educators (60). They note that at many institutions, SETs appear to have little influence over recognition of excellence, for example through awards or commendations; classroom observations and teaching portfolios appear to be used more often for these determinations. SETs, in contrast, appear to have a more “punitive” function (61), used more often to single out teachers who purportedly fall short in effectiveness (60).

The authors note the vulnerability of contingent and non-tenure-track faculty to poorly implemented SETs and argue that a climate of fear occasioned by such practices can lead to “lenient grading and lowered demands” (61). They urge WPAs to consider the ethical implications of the use of SETs in their institutions.

Recommendations include “ensuring high response rates” through procedures and incentives; clarifying and standardizing designations of good and poor performance and ensuring transparency in the procedures for addressing low scores; and developing forms specific to local conditions and programs (61-62). Several of the recommendations concern increasing WPA authority over hiring and mentoring teachers, including tenure-track and tenured faculty. Wooten et al. recommend that all teachers assigned to writing courses administer writing-specific evaluations and be required to act on the information these forms provide; the annual-report process can allow tenured faculty to demonstrate their responsiveness (62).

The authors hope that these recommendations will lead to a “disciplinary discussion” among WPAs that will guide “the creation of locally appropriate evaluation forms that balance the needs of all stakeholders—students, teachers, and administrators” (63).



West-Puckett, Stephanie. Digital Badging as Participatory Assessment. CE, Nov. 2016. Posted 11/17/2016.

Stephanie West-Puckett presents a case study of the use of “digital badges” to create a local, contextualized, and participatory assessment process that works toward social justice in the writing classroom.

She notes that digital badges are graphic versions of those earned by scouts or worn by members of military groups to signal “achievement, experience, or affiliation in particular communities” (130). Her project, begun in Fall 2014, grew out of Mozilla’s free Open Badging Initiative and the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC) that funded grants to four universities as well as to museums, libraries, and community partnerships to develop badging as a way of recognizing learning (131).

West-Puckett employed badges as a way of encouraging and assessing student engagement in the outcomes and habits of mind included in such documents as the Framework for Success in Postsecondary Writing, the Outcomes Statements for First-Year Composition produced by the Council of Writing Program Administrators, and her own institution’s outcomes statement (137). Her primary goal is to foster a “participatory” process that foregrounds the agency of teachers and students and recognizes the ways in which assessment can influence classroom practice. She argues that such participation in designing and interpreting assessments can address the degree to which assessment can drive bias and limit access and agency for specific groups of learners (129).

She reviews composition scholarship characterizing most assessments as “top-down” (127-28). In these practices, West-Puckett argues, instruments such as rubrics become “fetishized,” with the result that they are forced upon contexts to which they are not relevant, thus constraining the kinds of assignments and outcomes teachers can promote (134). Moreover, assessments often fail to encourage students to explore a range of literacies and do not acknowledge learners’ achievements within those literacies (130). More valid, for West-Puckett, are “hyperlocal” assessments designed to help teachers understand how students are responding to specific learning opportunities (134). Allowing students to join in designing and implementing assessments makes the learning goals visible and shared while limiting the power of assessment tools to marginalize particular literacies and populations (128).

West-Puckett contends that the multimodal focus in writing instruction exacerbates the need for new modes of assessment. She argues that digital badges partake of “the primacy of visual modes of communication,” especially for populations “whose bodies were not invited into the inner sanctum of a numerical and linguistic academy” (132). Her use of badges contributes to a form of assessment that is designed not to deride writing that does not meet the “ideal text” of an authority but rather to enlist students’ interests and values in “a dialogic engagement about what matters in writing” (133).

West-Puckett argues for pairing digital badging with “critical validity inquiry,” in which the impact of an assessment process is examined through a range of theoretical frames, such as feminism, Marxism, or queer or disability theory (134). This inquiry reveals assessment’s role in sustaining or potentially disrupting entrenched views of what constitutes acceptable writing by examining how such views confer power on particular practices (134-35).

In West-Puckett’s classroom in a “mid-size, rural university in the south” with a high percentage of students of color and first-generation college students (135), small groups of students chose outcomes from the various outcomes statements, developed “visual symbols” for the badges, created a description of the components and value of the outcomes for writing, and detailed the “evidence” that applicants could present from a range of literacy practices to earn the badges (137). West-Puckett hoped that this process would decrease the “disconnect” between her understanding of the outcomes and that of students (136), as well as engage students in a process that takes into account the “lived consequences of assessment” (141): its disparate impact on specific groups.

The case study examines several examples of badges, such as one using a compass to represent “rhetorical knowledge” (138). The group generated multimodal presentations, and applicants could present evidence in a range of forms, including work done outside of the classroom (138-39). The students in the group decided whether or not to award the badge.

West-Puckett details the degree to which the process invited “lively discussion” by examining the “Editing MVP” badge (139). Students defined editing as proofreading and correcting one’s own paper but visually depicted two people working together. The group refused the badge to a student of color because of grammatical errors but awarded it to another student who argued for the value of using non-standard dialogue to show people “‘speaking real’ to each other” (qtd. in West-Puckett 140). West-Puckett recounts the classroom discussion of whether editing could be a collaborative effort and when and in what contexts correctness matters (140).

In Fall 2015, West-Puckett implemented “Digital Badging 2.0” in response to her concerns about “the limited construct of good writing some students clung to” as well as how to develop “badging economies that asserted [her] own expertise as a writing instructor while honoring the experiences, viewpoints, and subject positions of student writers” (142). She created two kinds of badging activities, one carried out by students as before, the other for her own assessment purposes. Students had to earn all the student-generated badges in order to pass, and a given number of West-Puckett’s “Project Badges” to earn particular grades (143). She states that she privileges “engagement as opposed to competency or mastery” (143). She maintains that this dual process, in which her decision-making process is shared with the students who are simultaneously grappling with the concepts, invites dialogue while allowing her to consider a wide range of rhetorical contexts and literacy practices over time (144).

West-Puckett reports that although she found evidence that the badging component did provide students an opportunity to take more control of their learning, as a whole the classes did not “enjoy” badging (145). They expressed concern about the extra work, the lack of traditional grades, and the responsibility involved in meeting the project’s demands (145). However, in disaggregated responses, students of color and lower-income students viewed the badge component favorably (145). According to West-Puckett, other scholars have similarly found that students in these groups value “alternative assessment models” (146).

West-Puckett lays out seven principles that she believes should guide participatory assessment, foregrounding the importance of making the processes “open and accessible to learners” in ways that “allow learners to accept or refuse particular identities that are constructed through the assessment” (147). In addition, “[a]ssessment artifacts,” in this case badges, should be “portable” so that students can use them beyond the classroom to demonstrate learning (148). She presents badges as an assessment tool that can embody these principles.



Moxley and Eubanks. Comparing Peer Review and Instructor Ratings. WPA, Spring 2016. Posted 08/13/2016.

Moxley, Joseph M., and David Eubanks. “On Keeping Score: Instructors’ vs. Students’ Rubric Ratings of 46,689 Essays.” Journal of the Council of Writing Program Administrators 39.2 (2016): 53-80. Print.

Joseph M. Moxley and David Eubanks report on a study of their peer-review process in their two-course first-year-writing sequence. The study, involving 16,312 instructor evaluations and 30,377 student reviews of “intermediate drafts,” compared instructor responses to student rankings on a “numeric version” of a “community rubric” using a software package, My Reviewers, that allowed for discursive comments but also, in the numeric version, required rubric traits to be assessed on a five-point scale (59-61).

Exploring the literature on peer review, Moxley and Eubanks note that most such studies are hindered by small sample sizes (54). They note a dearth of “quantitative, replicable, aggregated data-driven (RAD) research” (53), finding only five such studies that examine more than 200 students (56-57), with most empirical work on peer review occurring outside of the writing-studies community (55-56).

Questions investigated in this large-scale empirical study involved determining whether peer review was a “worthwhile” practice for writing instruction (53). More specific questions addressed whether or not student rankings correlated with those of instructors, whether these correlations improved over time, and whether the research would suggest productive changes to the process currently in place (55).

The study took place at a large research university where the composition faculty, consisting primarily of graduate students, practiced a range of options in their use of the My Reviewers program. For example, although all commented on intermediate drafts, some graded the peer reviews, some discussed peer reviews in class despite the anonymity of the online process, and some included training in the peer-review process in their curriculum, while others did not.

Similarly, the My Reviewers package offered options including comments, endnotes, and links to a bank of outside sources, exercises, and videos; some instructors and students used these resources while others did not (59). Although the writing program administration does not impose specific practices, the program provides multiple resources as well as a required practicum and annual orientation to assist instructors in designing their use of peer review (58-59).

The rubric studied covered five categories: Focus, Evidence, Organization, Style, and Format. Focus, Organization, and Style were broken down into the subcategories of Basics—”language conventions”—and Critical Thinking—”global rhetorical concerns.” The Evidence category also included the subcategory Critical Thinking, while Format encompassed Basics (59). For the first year and a half of the three-year study, instructors could opt for the “discuss” version of the rubric, though the numeric version tended to be preferred (61).

The authors note that students and instructors provided many comments and other “lexical” items, but that their study did not address these components. In addition, the study did not compare students based on demographic features, and, due to its “observational” nature, did not posit causal relationships (61).

A major finding was that, while there was some “low to modest” correlation between the two sets of scores (64), students generally scored the essays more positively than instructors; this difference was statistically significant when the researchers looked at individual traits (61, 67). Differences between the two sets of scores were especially evident on the first project in the first course; correlation did increase over time. The researchers propose that students learned “to better conform to rating norms” after their first peer-review experience (64).

The authors discovered that peer reviewers were easily able to distinguish between very high-scoring papers and very weak ones, but struggled to make distinctions between papers in the B/C range. Moxley and Eubanks suggest that the ability to distinguish levels of performance is a marker for “metacognitive skill” and note that struggles in making such distinctions for higher-quality papers may be commensurate with the students’ overall developmental levels (66).

These results lead the authors to consider whether “using the rubric as a teaching tool” and focusing on specific sections of the rubric might help students more closely conform to the ratings of instructors. They express concern that the inability of weaker students to distinguish between higher scoring papers might “do more harm than good” when they attempt to assess more proficient work (66).

Analysis of scores for specific rubric traits indicated to the authors that students’ ratings differed more from those of instructors on complex traits (67). Closer examination of the large sample also revealed that students whose teachers gave their own work high scores produced scores that more closely correlated with the instructors’ scores. These students also demonstrated more variance than did weaker students in the scores they assigned (68).

Examination of the correlations led to the observation that all of the scores for both groups were positively correlated with each other: papers with higher scores on one trait, for example, had higher scores across all traits (69). Thus, the traits were not being assessed independently (69-70). The authors propose that reviewers “are influenced by a holistic or average sense of the quality of the work and assign the eight individual ratings informed by that impression” (70).
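The inter-trait correlation pattern the authors describe can be illustrated with a small sketch. The scores below are hypothetical, not data from the study, and the computation is a generic Pearson correlation rather than the authors’ actual analysis; a strong positive coefficient between two traits is the kind of evidence that would suggest raters are scoring from a single holistic impression:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 rubric ratings for five papers on two traits.
focus = [4, 3, 5, 2, 4]
evidence = [4, 2, 5, 3, 4]

print(round(pearson(focus, evidence), 2))  # → 0.81, a strong positive correlation
```

If every pair of rubric traits shows correlations of this magnitude, the eight trait scores carry little information beyond a single holistic rating, which is the possibility the authors raise.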

If so, the authors suggest, isolating individual traits may not necessarily provide more information than a single holistic score. They posit that holistic scoring might not only facilitate assessment of inter-rater reliability but also free raters to address a wider range of features than are usually included in a rubric (70).

Moxley and Eubanks conclude that the study produced “mixed results” on the efficacy of their peer-review process (71). Students’ improvement with practice and the correlation between instructor scores and those of stronger students suggested that the process had some benefit, especially for stronger students. Students’ difficulty with the B/C distinction and the low variance in weaker students’ scoring raised concerns (71). The authors argue, however, that there is no indication that weaker students do not benefit from the process (72).

The authors detail changes to their rubric resulting from their findings, such as creating separate rubrics for each project and allowing instructors to “customize” their instruments (73). They plan to examine the comments and other discursive components in their large sample, and urge that future research create a “richer picture of peer review processes” by considering not only comments but also the effects of demographics across many settings, including in fields other than English (73, 75). They acknowledge the degree to which assigning scores to student writing “reifies grading” and opens the door to many other criticisms, but contend that because “society keeps score,” the optimal response is to continue to improve peer review so that it benefits the widest range of students (73-74).



Boyle, Casey. Rhetoric and/as Posthuman Practice. CE, July 2016. Posted 08/06/2016.

Boyle, Casey. “Writing and Rhetoric and/as Posthuman Practice.” College English 78.6 (2016): 532-54. Print.

Casey Boyle examines the Framework for Success in Postsecondary Writing, issued by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project, in light of its recommendation that writing instruction encourage the development of “habits of mind” that result in enhanced learning.

Boyle focuses especially on the Framework‘s attention to “metacognition,” which he finds to be largely related to “reflection” (533). In Boyle’s view, when writing studies locates reflection at the center of writing pedagogy, as he argues it does, the field endorses a set of “bad habits” that he relates to a humanist mindset (533). Boyle proposes instead a view of writing and writing pedagogy that is “ecological” and “posthuman” (538). Taking up Kristine Johnson’s claim that the Framework opens the door to a revitalization of “ancient rhetorical training,” Boyle challenges the equation of such training with a central mission of social and political critique (534).

Boyle recounts a history of writing pedagogy beginning with “current-traditional rhetoric” as described by Sharon Crowley and others as the repetitive practice of form (535). Rejection of this pedagogy resulted in a shift toward rhetorical and writing education as a means of engaging students with their social and political surroundings. Boyle terms this focus “current-critical rhetoric” (536). Its primary aim, he argues, is to increase an individual’s agency in that person’s dealings with his or her cultural milieu, enhancing the individual’s role as a citizen in a democratic polity (536).

Boyle critiques current-critical rhetoric, both in its approach to the self and in its insistence on the importance of reflection as a route to critical awareness, for its determination to value the individual’s agency over the object, which is viewed as separate from the acting self (547). Boyle cites Peter Sloterdijk’s view that the humanist sense of a writing self manifests itself in the “epistle or the letter to a friend” that demonstrates the existence of a coherent identity represented by the text (537). Boyle further locates a humanist approach in the “reflective letter assignments” that ask students to demonstrate their individual agency in choosing among many options as they engage in rhetorical situations (537).

To develop the concept of the “ecological orientation” (538) that is consistent with a posthumanist mindset, Boyle explores a range of iterations of posthumanism, which he stresses is not to be understood as “after the human” (539). Rather, quoting N. Katherine Hayles, Boyle characterizes posthumanism as “the end of a certain conception of the human” (qtd. in Boyle 539). Central to posthumanism is the idea of human practices as one component of a “mangled assemblage” of interactions among both human and nonhuman entities (541) in which the separation of subject and object becomes impossible. In this view, “rhetorical training” would become “an orchestration of ecological relations” (539), in which practices within a complex of technologies and environments, some of them not consciously summoned, would emerge from the relations and shape future practices and relations.

Boyle characterizes this understanding of practice as a relation of “betweenness among what was previously considered the human and the nonhuman” (540; emphasis in original). He applies Andrew Pickering’s metaphor of practice as a “reciprocal tuning of people and things” (541). In such an orientation, “[t]heory is a practice” that “is continuous with and not separate from the mediation of material ecologies” (542). Practice becomes an “ongoing tuning” (542) that functions as a “way of becoming” (Robert Yagelski, qtd. in Boyle 538; emphasis in original).

In Boyle’s view, the Framework points toward this ecological orientation in stressing the habit of “openness” to “new ways of being” (qtd. in Boyle 541). In addition, the Framework envisions students “writing in multiple environments” (543; emphasis in Boyle). Seen in a posthuman light, such multiple exposures redirect writers from the development of critical awareness to, in Pickering’s formulation, knowledge understood as a “sensitivity” to the interactions of ecological components in which actors both human and nonhuman are reciprocally generative of new forms and understandings (542). Quoting Isabelle Stengers, Boyle argues that “an ecology of practices does not have any ambition to describe things ‘as they are’ . . . but as they may become” (qtd. in Boyle 541).

In Boyle’s formulation, agency becomes “capacity,” which is developed through repeated practice that then “accumulates prior experience” to construct a “database of experience” that establishes the habits we draw on to engage productively with future environments (545). Such an accumulation comes to encompass, in the words of Collin Brooke, “all of the ‘available means'” (qtd. in Boyle 549), not all of them visible to conscious reflection (544), through which we can affect and be affected by ongoing relations in rhetorical situations.

Boyle embodies such practice in the figure of the archivist “whose chief task is to generate an abundance of relations” rather than that of the letter writer (550), thus expanding options for being in the world. Boyle emphasizes that the use of practice in this way is “serial” in that each reiteration is both “continuous” and “distinct,” with the components of the series “a part of, but also apart from, any linear logic that might be imposed” (547): “Practice is the repetitive production of difference” (547). Practice also becomes an ethics that does not seek to impose moral strictures (548) but rather to enlarge and enable “perception” and “sensitivities” (546) that coalesce, in the words of Rosi Braidotti, in a “pragmatic task of self-transformation through humble experimentation” (qtd. in Boyle 539).

Boyle connects these endeavors to rhetoric’s historical allegiance to repetition through sharing “common notions” (Giles Deleuze, qtd. in Boyle 550). Persuasion, he writes, “occurs . . . not as much through rational appeals to claims but through an exercise of material and discursive forms” (550), that is, through relations enlarged by habits of practice.

Related to this departure from conscious rational analysis is Boyle’s proposed posthuman recuperation of “metacognition,” which he states has generally been perceived to involve analysis from a “distance or remove from an object to which one looks” (551). In Boyle’s view, metacognition can be understood more productively through a secondary meaning that connotes “after” and “among” (551). Similarly, rhetoric operates not in the particular perception arising from a situated moment but “in between” the individual moment and the sensitivities acquired from experience in a broader context (550; emphasis in original):

[R]hetoric, by attending more closely to practice and its nonconscious and nonreflective activity, reframes itself by considering its operations as exercises within a more expansive body of relations than can be reduced to any individual human. (552)

Such a sensibility, for Boyle, should refigure writing instruction, transforming it into “a practice that enacts a self” (537) in an ecological relation to that self’s world.



Obermark et al. New TA Development Model. WPA, Fall 2015. Posted 02/08/2016.

Obermark, Lauren, Elizabeth Brewer, and Kay Halasek. “Moving from the One and Done to a Culture of Collaboration: Revising Professional Development for TAs.” Journal of the Council of Writing Program Administrators 39.1 (2015): 32-53. Print.

Lauren Obermark, Elizabeth Brewer, and Kay Halasek detail a professional development model for graduate teaching assistants (TAs) that was established at their institution to better meet the needs of both beginning and continuing TAs. Their model responded to the call from E. Shelley Reid, Heidi Estrem, and Marcia Belcheir to “[g]o gather data—not just impressions—from your own TAs” in order to understand and foreground local conditions (qtd. in Obermark et al. 33).

To examine and revise their professional development process beginning in 2011 and continuing through 2013, Obermark et al. conducted a survey of current TAs, held focus groups, and surveyed “alumni” TAs to determine TAs’ needs and their reactions to the support provided by the program (35-36).

An exigency for Obermark et al. was the tendency they found in the literature to concentrate TA training on the first semester of teaching. They cite Beth Brunk-Chavez to note that this tendency gives short shrift to the continuing concerns and professional growth of TAs as they advance from their early experiences in first-year writing to more complex teaching assignments (33). As a result of their research, Obermark et al. advocate for professional development that is “collaborative,” “ongoing,” and “distributed across departmental and institutional locations” (34).

The TA program in place at the authors’ institution prior to the assessment included a week-long orientation, a semester’s teaching practicum, a WPA class observation, and a syllabus built around a required textbook (34). After their first year, TAs were able to move on to other classes, particularly the advanced writing class, which fulfills a general education requirement across the university and is expected to provide a more challenging writing experience, including a “scaffolded research project” (35). Obermark et al. found that while students with broader teaching backgrounds were often comfortable with designing their own syllabus to meet more complex pedagogical requirements, many TAs who had moved from the well-supported first-year course to the second wished for more guidance than they had received (35).

Consulting further scholarship by Estrem and Reid led Obermark et al. to address “a common error” in professional development: failing to conduct a “needs assessment” by directly asking questions designed to determine, in the words of Kathleen Blake Yancey, “the characteristics of the TAs for whom the program is designed” (qtd. in Obermark et al. 36-37). The use of interview methodology through focus groups not only instilled a collaborative ethos but also permitted the authors to plan “developmentally appropriate PD” and provided TAs with what the authors see as a rare opportunity to reflect on their experiences as teachers. Obermark et al. stress that this fresh focus on what Cynthia Selfe and Gail Hawisher call a “participatory model of research” (37) allowed the researchers to demonstrate their perceptions of the TAs as professional colleagues, leading the TAs themselves “to identify more readily as professionals” (37).

TAs’ sense of themselves as professionals was further strengthened by the provision of “ongoing” support to move beyond what Obermark et al. call “the one and done” model (39). Through the university teaching center, they encountered Jody Nyquist and Jo Sprague’s theory of three stages of TA development: “senior learners” who “still identify strongly with students”; “colleagues in training” who have begun to recognize themselves as teachers; and “junior colleagues” who have assimilated their professional identities to the point that they “may lack only the formal credentials” (qtd. in Obermark et al. 39). Obermark et al. note that their surveys revealed, as Nyquist and Sprague predicted, that their population comprised TAs at all three levels as they moved through these stages at different rates (39-40).

The researchers learned that even experienced TAs still often had what might have been considered basic questions about the goals of the more advanced course and how to integrate the writing process into the course’s general education outcomes (40). The research revealed that as TAs moved past what Nyquist and Sprague denoted the “survival” mode that tends to characterize a first year of teaching, they began to recognize the value of composition theory and became more invested in applying theory to their teaching (39). That 75% of the alumni surveyed were teaching writing in their institutions regardless of their actual departmental positions reinforced the researchers’ certainty and the TAs’ awareness that composition theory and practice would be central to their ongoing academic careers (40).

Refinements included a more extensive schedule of optional workshops and a “peer-to-peer” program that responded to TA requests for more opportunities to observe and interact with each other. Participating TAs received guidance on effective observation processes and feedback; subsequent expansion of this program offered TAs opportunities to collaborate on designing assignments and grading as well (42).

The final component of the new professional-development model focused on expanding the process of TA support across both the English department and the wider university. Obermark et al. indicate that many of the concerns expressed by TAs addressed not just teaching writing with a composition-studies emphasis but also teaching more broadly in areas that “did not fall neatly under our domain as WPAs and specialists in rhetoric and composition” (43). For example, TAs asked for more guidance in working with students’ varied learning styles and, in particular, in meeting the requirement for “social diversity” expressed in the general-education outcomes for the more advanced course (44). Some alumni TAs reported wishing for more help teaching in other areas within English, such as in literature courses (45).

The authors designed programs featuring faculty and specialists in different pedagogical areas, such as diversity, as well as workshops and break-outs in which TAs could explore kinds of teaching that would apply across the many different environments in which they found themselves as professionals (45). Obermark et al. note especially the relationship they established with the university teaching center, a collaboration that allowed them to integrate expertise in composition with other philosophies of teaching and that provided “allies in both collecting data and administering workshops for which we needed additional expertise” (45). Two other specific benefits from this partnership were the enhanced “institutional memory” that resulted from inclusion of a wider range of faculty and staff and increased sustainability for the program as a larger university population became invested in the effort (45-46).

Obermark et al. provide their surveys and focus-group questions, urging other WPAs to engage TAs in their own development and to relate to them “as colleagues in the field rather than novices in need of training, inoculation, or the one and done approach” (47).