College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Nazzal et al. Curriculum for Targeted Instruction at a Community College. TETYC, Mar. 2020. Posted 06/11/2020.

Nazzal, Jane S., Carol Booth Olson, and Huy Q. Chung. “Differences in Academic Writing across Four Levels of Community College Composition Courses.” Teaching English in the Two-Year College 47.3 (2020): 263-96. Print.

Jane S. Nazzal, Carol Booth Olson, and Huy Q. Chung present an assessment tool to help writing educators design curriculum during a shift from faculty-scored placement exams and developmental or “precollegiate” college courses (263) to what they see as common reform options (264-65, 272).

These options, they write, often include directed self-placement (DSP), while preliminary courses designed for students who might struggle with “transfer-level” courses are often replaced with two college-level courses, one with a concurrent support component for students who feel they need extra help, and one without (265). At the authors’ institution, “a large urban community college in California” with an enrollment of 50,000 that is largely Hispanic and Asian, faculty-scored exams placed 15% of the students into the transfer-level course; after the implementation of DSP, 73% chose the transfer course, 12% the course with support, and the remaining 15% the precollegiate courses (272).

The transition to DSP and away from precollegiate options, according to Nazzal et al., resulted from a shift away from “access” afforded by curricula intended to help underprepared students toward widespread emphasis on persistence and time to completion (263). The authors cite scholarship contending that processes that placed students according to faculty-scored assessments incorrectly placed one-third to one-half of students and disparately affected minority students; fewer than half of students placed into precollegiate courses reach the transfer-level course (264).

In the authors’ view, the shift to DSP as a solution for these problems creates its own challenges. They contend that valuable information about student writing disappears when faculty no longer participate in placement processes (264). Moreover, they question the reliability of high-school grades as a basis for student decisions, arguing that high school curricula often include little writing (265). They cite “burden-shifting” when the responsibility for making good choices is passed to students who may have incomplete information and little experience with college work (266). Noting as well that lower-income students may opt for the unsupported transfer course because of the time pressure of their work and home lives, the authors see a need for research on how to address the specific situations of students who opt out of support they may need (266-67).

The study implemented by Nazzal et al. attempts to identify the specific areas that affect student success in college writing in order to facilitate “explicit teaching” and “targeted instruction” (267). They believe that their process identifies features of successful writing that are largely missing from the work of inexperienced writers but that can be taught (268).

The authors review cognitive research on the differences between experienced and novice writers, identifying areas like “Writing Objectives,” “Revision,” and “Sense of Audience” (269-70). They present “[f]oundational [r]esearch” that compares the “writer-based prose” of inexpert writers with the “reader-based prose” of experts (271), as well as the whole-essay conceptualization of successful writers versus the piecemeal approach of novices, among other differentiating features (269).

The study was implemented during the first two weeks of class over two semesters, with eight participating faculty teaching thirteen sections. Two hundred twenty-five students from three precollegiate levels and the single transfer-level course completed the tasks. The study essays were similar to the standard college placement essays taken by most of the students in that they were timed responses to prompts, but for the study, students were asked to read two pieces and “interpret, and synthesize” them in their responses (272-73). One piece was a biographical excerpt (on Harriet Tubman or on the war hero Louie Zamperini) and the other a “shorter, nonfiction article outlining particular character qualities or traits,” one discussing leadership and the other resilience (274). The prompts asked students to choose a single trait exhibited by the subject that most contributed to his or her success (274).

In the first of two 45-minute sessions, teachers read the pieces aloud while students followed along, then gave preliminary guidance using a graphical organizer. In the second session, students wrote their essays. The essays were rated by experienced writing instructors trained in scoring, using criteria for “high-school writing competency” based on principles established by mainstream composition assessment models (273-74).

Using “several passes through the data,” the lead researcher examined a subset of 76 papers that covered the full range of scores in order to identify features that were “compared in frequency across levels.” Differences in the frequency of these features were analyzed for statistical significance across the four levels (275). A subsample of 18 high-scoring papers was subsequently analyzed for “distinguishing elements . . . that were not present in lower-scoring papers,” including some features that had not been previously identified (275).

Nine features were compared across the four levels; the authors provide examples of presence versus absence of these features (276-79). Three features differed significantly in their frequency in the transfer-level course versus the precollegiate courses: including a clear claim, responding to the specific directions of the prompt, and referring to the texts (279).
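The article does not specify the statistical test behind these comparisons, but a presence/absence comparison of a feature across course levels is the kind of analysis commonly run as a chi-square test on a contingency table. The sketch below, with invented counts, is offered only as an illustration of that general procedure, not as a reconstruction of Nazzal et al.’s analysis.

```python
# Minimal sketch of a chi-square comparison of one feature's frequency across
# the four course levels. The counts are invented for illustration; the article
# reports only which differences reached statistical significance.

from scipy.stats import chi2_contingency

# Rows: feature present / feature absent.
# Columns: three precollegiate levels plus the transfer-level course.
table = [[ 5,  8, 11, 17],
         [14, 12,  9,  4]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```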

Nazzal et al. also discovered that a quarter of the students placed in the transfer-level course failed to refer to the text, and that only half the students in that course earned passing scores, indicating that many had not incorporated one or more of the important features. The authors conclude that students at all levels would benefit from a curriculum targeting these moves (281).

Writing that only 9% of the papers scored in the “high” range of 9-12 points, Nazzal et al. present an annotated example of a paper that includes components that “went above and beyond the features that were listed” (281). Four distinctive features of these papers were

(1) a clear claim that is threaded throughout the paper; (2) a claim that is supported by relevant evidence and substantiated with commentary that discusses the significance of the evidence; (3) a conclusion that ties back to the introduction; and (4) a response to all elements of the prompt. (282)

Providing appendices to document their process, Nazzal et al. offer recommendations for specific “writing moves that establish communicative clarity in an academic context” (285). They contend that it is possible to identify and teach the moves necessary for students to succeed in college writing. In their view, their identification of differences in the writing of students entering college with different levels of proficiency suggests specific candidates for the kind of targeted instruction that can help all students succeed.


Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho to use the mandate to end remediation as it is usually defined and replace it with a more effective placement model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements in ways that enhance student success. First, they can move to create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
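Estrem et al. do not publish the algorithm itself, but the description above suggests a rules-based recommender that weighs reflection responses and semester context and returns a primary and secondary placement. The sketch below is a hypothetical illustration of that general design; the course labels, question names, thresholds, and weights are invented for the example and should not be read as The Write Class’s actual logic.

```python
# Hypothetical sketch of a questionnaire-driven placement recommender, loosely
# modeled on the phases described above. All fields, weights, and cutoffs are
# illustrative assumptions, not The Write Class's actual algorithm.

from dataclasses import dataclass

@dataclass
class Responses:
    reading_writing_confidence: int     # phase-2 reflection, e.g., 1 (low) to 5 (high)
    prior_research_writing: bool        # has written a source-based research paper
    weekly_outside_obligations: int     # phase-4 context: hours of work/family commitments
    advanced_placement_evidence: bool   # assumed: test scores or other data affecting placement

COURSES = ["Course A (first-year writing)",
           "Course B (first-year writing + workshop hour)",
           "Course C (research-based second-semester course)"]

def recommend(r: Responses) -> tuple[str, str]:
    """Return a (primary, secondary) placement suggestion."""
    score = r.reading_writing_confidence
    if r.prior_research_writing:
        score += 1
    if r.weekly_outside_obligations > 30:   # heavy outside demands favor the supported option
        score -= 1

    if r.advanced_placement_evidence and score >= 5:
        return COURSES[2], COURSES[0]
    if score >= 3:
        return COURSES[0], COURSES[1]
    return COURSES[1], COURSES[0]

print(recommend(Responses(4, True, 10, False)))
# ('Course A (first-year writing)', 'Course B (first-year writing + workshop hour)')
```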

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).


Pruchnic et al. Mixed Methods in Direct Assessment. J of Writ Assessment, 2018. Posted 12/01/2018.

Pruchnic, Jeff, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton. “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing.” Journal of Writing Assessment 11.1 (2018). Web. 27 Nov. 2018.

[Page numbers from pdf generated from the print dialogue]

Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tanina Foster, and Ellen Barton report on an assessment of “reflection argument essay[s]” from the first-year-composition population of a large, urban, public research university (6). Their assessment used “mixed methods,” including a “thin-slice” approach (1). The authors suggest that this method can address difficulties faced by many writing programs in implementing effective assessments.

The authors note that many stakeholders to whom writing programs must report value large-scale quantitative assessments (1). They write that the validity of such assessments is often measured in terms of statistically determined interrater reliability (IRR) and samples considered large enough to adequately represent the population (1).

Administrators and faculty of writing programs often find that implementing this model requires time and resources that may not be readily available, even for smaller programs. Critics of this model note that one of its requirements, high interrater reliability, can too easily come to stand in for validity (2); in the view of Peter Elbow, such assessments favor “scoring” over “discussion” of the results (3). Moreover, according to the authors, critics point to the “problematic decontextualization of program goals and student achievement” that large-scale assessments can foster (1).

In contrast, Pruchnic et al. report, writing programs have tended to value the “qualitative assessment of a smaller sample size” because such models more likely produce the information needed for “the kinds of curricular changes that will improve instruction” (1). Writing programs, the authors maintain, have turned to redefining a valid process as one that can provide this kind of information (3).

Pruchnic et al. write that this resistance to statistically sanctioned assessments has created a bind for writing programs. They cite scholars like Peggy O’Neill (2) and Richard Haswell (3) to posit that when writing programs refuse the measures of validity required by external stakeholders, they risk having their conclusions dismissed and may well find themselves subject to outside intervention (3). Haswell’s article “Fighting Number with Number” proposes producing quantitative data as a rhetorical defense against external criticism (3).

In the view of the authors, writing programs are still faced with “sustainability” concerns:

The more time one spends attempting to perform quantitative assessment at the size and scope that would satisfy statistical reliability and validity, the less time . . . one would have to spend determining and implementing the curricular practices that would support the learning that instructors truly value. (4)

Hoping to address this bind, Pruchnic et al. write of turning to a method developed in the social sciences to analyze “lengthy face-to-face social and institutional interactions” (5). In a “thin-slice” methodology, raters use a common rubric to score small segments of the longer event. The authors report that raters using this method were able to predict outcomes, such as the number of surgery malpractice claims or teacher-evaluation results, as accurately as those scoring the entire data set (5).

To test this method, Pruchnic et al. created two teams, a “Regular” and a “Research” team. The study compared interrater reliability, “correlation of scores,” and the time involved to determine how closely the Research raters, scoring thin slices of the assessment data, matched the work of the Regular raters (5).

Pruchnic et al. provide a detailed description of their institution and writing program (6). The university’s assessment approach is based on Edward White’s “Phase 2 assessment model,” which involves portfolios with a final reflective essay, the prompt for which asks students to write an evidence-based argument about their achievements in relation to the course outcomes (8). The authors note that limited resources gradually reduced the amount of student writing that was actually read, as raters moved from full-fledged portfolio grading to reading only the final essay (7). The challenges of assessing even this limited amount of student work led to a sample that consisted of only 6-12% of the course enrollment.

The authors contend that this is not a representative sample; as a result, “we were making decisions about curricular and other matters that were not based upon a solid understanding of the writing of our entire student body” (7). The assessment, in the authors’ view, therefore did not meet necessary standards of reliability and validity.

The authors describe developing the rubric to be used by both the Research and Regular teams from the precise prompt for the essay (8). They used a “sampling calculator” to determine that, given the total of 1,174 essays submitted, 290 papers would constitute a representative sample; instructors were asked for specific, randomly selected papers to create a sample of 291 essays (7-8).
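The authors do not name the “sampling calculator” they used, but the 290-paper figure matches what a standard sample-size formula (Cochran’s estimate with a finite-population correction, 95% confidence, 5% margin of error) yields for a population of 1,174. The quick check below assumes those conventional parameters.

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's sample-size formula with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population estimate (~384)
    n = n0 / (1 + (n0 - 1) / population)           # correct for the finite pool of essays
    return math.ceil(n)

print(sample_size(1174))  # 290, matching the representative sample the authors report
```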

The Regular team worked in two-member pairs, both members of each pair reading the entire essay, with third readers called in as needed (8): “[E]ach essay was read and scored by only one two-member team” (9). The authors used “double coding” in which one-fifth of the essays were read by a second team to establish IRR (9). In contrast, the 10-member Research team was divided into two groups, each of which scored half the essays. These readers were given material from “the beginning, middle, and end” of each essay: the first paragraph, the final paragraph, and a paragraph selected from the middle page or pages of the essay, depending on its length. Raters scored the slices individually; the average of the five team members’ scores constituted the final score for each paper (9).
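As a rough illustration of the thin-slice step described above, the sketch below pulls an essay’s first, middle, and final paragraphs and averages one team’s individual scores into a final score. The score scale and the paragraph-selection rule are simplifying assumptions, not the study’s instrument.

```python
# Illustrative sketch of thin-slice scoring as summarized above: each rater
# sees only the beginning, middle, and end of an essay, and the five raters'
# scores are averaged into the essay's final score. Scales are assumed.

from statistics import mean

def thin_slice(paragraphs: list[str]) -> list[str]:
    """Return the beginning, middle, and end paragraphs of an essay."""
    return [paragraphs[0], paragraphs[len(paragraphs) // 2], paragraphs[-1]]

def final_score(rater_scores: list[float]) -> float:
    """Average the individual raters' scores (five per team in the study)."""
    return round(mean(rater_scores), 2)

essay = ["Intro paragraph ...", "Body 1 ...", "Body 2 ...", "Body 3 ...", "Conclusion ..."]
print(thin_slice(essay))             # ['Intro paragraph ...', 'Body 2 ...', 'Conclusion ...']
print(final_score([3, 2, 3, 4, 3]))  # 3.0
```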

Pruchnic et al. discuss in detail their process for determining reliability and for correlating the scores given by the Regular and Research teams to determine whether the two groups were scoring similarly. Analysis of interrater reliability revealed that the Research Team’s IRR was “one full classification higher” than that of the Regular readers (12). Scores correlated at the “low positive” level, but the correlation was statistically significant (13). Finally, the Research team as a whole spent “a little more than half the time” the Regular group spent on scoring, while the average individual scoring time for Research team members was less than half that of the Regular members (13).
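For readers curious about how such a score correlation might be computed, the sketch below runs a Pearson correlation on two invented score lists. The article, as summarized here, reports only that the observed correlation between the Regular and Research teams was “low positive” and statistically significant; it does not specify which statistic the authors used, so this is an assumption for illustration.

```python
# Hypothetical comparison of the two teams' scores. The score lists are
# invented; they stand in for the 291 paired essay scores in the study.

from scipy.stats import pearsonr

regular_scores  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4]   # whole-essay team
research_scores = [2, 2, 3, 3, 3, 2, 4, 3, 3, 2, 3, 4]   # averaged thin-slice team

r, p = pearsonr(regular_scores, research_scores)
print(f"r = {r:.2f}, p = {p:.3f}")  # with the study's n of 291, even a low r can reach significance
```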

Additionally, the assessment included holistic readings of 16 essays randomly representing the four quantitative result classifications of Poor through Good (11). This assessment allowed the authors to determine the qualities characterizing essays ranked at different levels and to address the pedagogical implications within their program (15, 16).

The authors conclude that thin-slice scoring, while not always the best choice in every context (16), “can be added to the Writing Studies toolkit for large-scale direct assessment of evaluative reflective writing” (14). Future research, they propose, should address the use of this method to assess other writing outcomes (17). Paired with a qualitative assessment, they argue, a mixed-method approach that includes thin-slice analysis as an option can help satisfy the need for statistically grounded data in administrative and public settings (16) while enabling strong curricular development, ideally resulting in “the best of both worlds” (18).


Ray et al. Rethinking Student Evaluations of Teaching. Comp Studies, Spring 2018. Posted 08/25/2018.

Ray, Brian, Jacob Babb, and Courtney Adams Wooten. “Rethinking SETs: Retuning Student Evaluations of Teaching for Student Agency.” Composition Studies 46.1 (2018): 34-56. Web. 10 Aug. 2018.

Brian Ray, Jacob Babb, and Courtney Adams Wooten report a study of Student Evaluations of Teaching (SETs) across a range of institutions. The researchers collected 55 different forms, 45 of which were institutions’ generic forms, while 10 were designed specifically for writing classes. They coded 1,108 different questions from these forms in order to determine what kinds of questions were being asked (35).

The authors write that although SETs and their use, especially in personnel decisions, are of concern in rhetoric and composition, very little scholarship in the field has addressed the issue (34-35). They summarize a history of student evaluations as tools for assessment of teachers, beginning with materials from the 1920s. Early SETs focused heavily on features of personality such as “wit,” “tact,” and “popularity” (38), as well as physical appearance (39). This focus on “subjective” characteristics of teachers asked students to judge “factors that neither they nor the instructor had sole control over and that they could do little to affect” (38).

This emphasis persisted throughout the twentieth century. Herbert Marsh conducted “numerous studies” in the 1970s and 1980s and eventually created the Student Evaluation of Educational Quality (SEEQ) form in 1987 (35). This instrument asked students about nine features:

[L]earning, enthusiasm, organization and clarity, group interaction, individual rapport, breadth of coverage, tests and grading, assignments, and difficulty (39)

The authors contend that these nine factors substantively guide the SETs they studied (35), and they claim that, in fact, in important ways, “current SET forms differ little from those seen in the 1920s” (40).

Some of composition’s “only published conversations about SETs” revolved around workshops conducted by the Conference on College Composition and Communication (CCCC) from 1956 through 1962 (39). The authors report that instructors participating in these discussions saw the forms as most appropriate for “formative” purposes; very few institutions used them in personnel matters (39).

Data from studies of SETs in other fields reveal some of the problems that can result from common versions of these measures (37). The authors state that studies over the last ten years have not been able to link high teacher ratings on SETs with improved student learning or performance (40). Studies point out that many of the most common categories, like “clarity and fairness,” remain subjective, and that students consistently rank instructors on personality rather than on more valid measures of effectiveness (41).

Such research documents bias related to gender and ethnicity, with female African-American teachers rated lowest on one study asking students to assess “a hypothetical curriculum vitae according to teaching qualifications and expertise” (42). Male instructors are more commonly praised for their “ability to innovate and stimulate critical thought”; women are downgraded for failing to be “compassionate and polite” (42). Studies showed that elements like class size and workload affected results (42). Physical attractiveness continues to influence student opinion, as does the presence of “any kind of reward,” like lenient grading or even supplying candy (43).

The authors emphasize their finding that a large percentage of the questions they examined asked students about either some aspect of the teacher’s behavior (e.g., “approachability,” “open-mindedness” [42]) or what the teacher did (“stimulated my critical thinking” [45]). The teacher was the subject of nearly half of the questions (45). The authors argue that “this pattern of hyper-attention” (44) to the teacher casts the teacher as “solely responsible” for the success or failure of the course (43). As a result, in the authors’ view, students receive a distorted view of agency in a learning situation. In particular, they are discouraged from seeing themselves as having an active role in their own learning (35).

The authors contend that assigning so much agency to a single individual runs counter to “posthumanist” views of how agency operates in complex social and institutional settings (36). In this view, many factors, including not only all participants and their histories and interests but also the environment and even the objects in the space, play a part in what happens in a classroom (36). When SET questions fail to address this complexity, the authors posit, issues of validity arise when students are asked to pass judgment on subjective and ambiguously defined qualities as well as on factors beyond the control of any participant (40). Students encouraged to focus on instructor agency may also misjudge teaching that opts for modern “de-center[ed]” teaching methods rather than the lecture-based instruction they expect (44).

Ray et al. note that some programs ask students about their own level of interest and willingness to participate in class activities and advocate increased use of such questions (45). But they particularly advocate replacing the emphasis on teacher agency with questions that encourage students to assess their own contributions to their learning experience as well as to examine the class experience as a whole and to recognize the “relational” aspects of a learning environment (46). For example:

Instead of asking whether instructors stimulated critical thought, it seems more reasonable to ask if students engaged in critical thinking, regardless of who or what facilitated engagement. (46; emphasis original)

Ray et al. conclude that questions that isolate instructors’ contributions should lean toward those that can be objectively defined and rated, such as punctuality and responding to emails in a set time frame (46).

The authors envision improved SETs, like those of some programs, that are based on a program’s stated outcomes and that ask students about the concepts and abilities they have developed through their coursework (48). They suggest that programs in institutions that use “generic” evaluations for broader analysis or that do not allow individual departments to eliminate the official form should develop their own parallel forms in order to gather the kind of information that enables more effective assessment of classroom activity (48-49).

A major goal, in the authors’ view, should be questions that “encourage students to identify the interconnected aspects of classroom agency through reflection on their own learning” (49).

 


Lindenman et al. (Dis)Connects between Reflection and Revision. CCC, June 2018. Posted 07/22/2018.

Lindenman, Heather, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch. “Revision and Reflection: A Study of (Dis)Connections between Writing Knowledge and Writing Practice.” College Composition and Communication 69.4 (2018): 581-611. Print.

Heather Lindenman, Martin Camper, Lindsay Dunne Jacoby, and Jessica Enoch report a “large-scale, qualitative assessment” (583) of students’ responses to an assignment pairing reflection and revision in order to evaluate the degree to which reflection and revision inform each other in students’ writing processes.

The authors cite scholarship designating reflection and revision “threshold concepts important to effective writing” (582). Scholarship suggests that reflection should encourage better revision because it “prompts metacognition,” defined as “knowledge of one’s own thinking processes and choices” (582). Lindenman et al. note the difficulties faced by teachers who recognize the importance of revision but struggle to overcome students’ reluctance to revise beyond surface-level correction (582). The authors conclude that engagement with the reflective requirements of the assignment did not guarantee effective revision (584).

The study team consisted of six English 101 instructors and four writing program administrators (587). The program had created a final English 101 “Revision and Reflection Assignment” in which students could draw on shorter memos on the four “linked essays” they wrote for the class. These “reflection-in-action” memos, using the terminology of Kathleen Blake Yancey, informed the final assignment, which asked for a “reflection-in-presentation”: students could choose one of their earlier papers for a final revision and write an extended reflection piece discussing their revision decisions (585).

The team collected clean copies of this final assignment from twenty 101 sections taught by fifteen instructors. A random sample across the sections resulted in a study size of 152 papers (586). Microsoft Word’s “compare document” feature allowed the team to examine students’ actual revisions.

In order to assess the materials, the team created a rubric judging the revisions as either “substantive, moderate, or editorial.” A second rubric allowed them to classify the reflections as “excellent, adequate, or inadequate” (586). Using a grounded-theory approach, the team developed forty codes to describe the reflective pieces (587). The study goal was to determine how well students’ accounts of their revisions matched the revisions they actually made (588).

The article includes the complete Revision and Reflection Assignment as well as a table reporting the assessment results; other data are available online (587). The assignment called for specific features in the reflection, which the authors characterize as “narrating progress, engaging teacher commentary, and making self-directed choices” (584).

The authors report that 28% of samples demonstrated substantive revision, while 44% showed moderate revision and 28% editorial revision. The reflection portion of the assignment garnered 19% excellent responses, 55% that were adequate, and 26% that were inadequate (587).

The “Narrative of Progress” invites students to explore the skills and concepts they feel they have incorporated into their writing process over the course of the semester. Lindenman et al. note that such narratives have been critiqued for inviting students to write “ingratiat[ing]” responses that they think teachers want to hear as well as for encouraging students to emphasize “personal growth” rather than a deeper understanding of rhetorical possibilities (588).

They include an example of a student who wrote about his struggles to develop stronger theses and who, in fact, showed considerable effort to address this issue in his revision, as well as an example of a student who wrote about “her now capacious understanding of revision in her memo” but whose “revised essay does not carry out or enact this understanding” (591). The authors report finding “many instances” where students made such strong claims but did not produce revisions that “actualiz[ed] their assertions” (591). Lindenman et al. propose that such students may have increased in their awareness of concepts, but that this awareness “was not enough to help them translate their new knowledge into practice within the context of their revisions” (592).

The section on student response to teacher commentary distinguishes between students for whom teachers’ comments served as “a heuristic” that allowed the students to take on roles as “agents” and the “majority” of students, who saw the comments as “a set of directions to follow” (592). Students who made substantive revisions, according to the authors, were able to identify issues called up by the teacher feedback and respond to these concerns in light of their own goals (594). While students who made “editorial” changes actually mentioned teacher comments more often (595), the authors point to shifts to first person in the reflective memos paired with visible revisions as an indication of student ownership of the process (593).

Analysis of “self-directed metacognitive practice” similarly found that students whose strong reflective statements were supported by actual revision showed evidence of “reach[ing] beyond advice offered by teachers or peers” (598). The authors note that, in contrast, “[a]nother common issue among self-directed, nonsubstantive revisers” was the expenditure of energy in the reflections to “convince their instructors that the editorial changes they made throughout their essays were actually significant” (600; emphasis original).

Lindenman et al. posit that semester progress-narratives may be “too abstracted from the actual practice of revision” and recommend that students receive “intentional instruction” to help them see how revision and reflection inform each other (601). They report changes to their assignment to foreground “the why of revision over the what” (602; emphasis original), and to provide students with a visual means of seeing their actual work via “track changes” or “compare documents” while a revision is still in progress (602).

A third change encourages more attention to the interplay between reflection and revision; the authors propose a “hybrid threshold concept: reflective revision” (604; emphasis original).

The authors find their results applicable to portfolio grading, in which, following the advice of Edward M. White, teachers are often encouraged to give more weight to the reflections than to the actual texts of the papers. The authors argue that only by examining the two components “in light of each other” can teachers and scholars fully understand the role that reflection can play in the development of metacognitive awareness in writing (604; emphasis original).

 


Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-81. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. They cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as that of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from first to final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “’bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, collecting “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that “the portrait is largely one of stability,” but note students’ ability to vary choices “when needed” (369). Statistically significant differences showing “change[s] across time” ranged from 13% in Cohort 1 to 29% in Cohort 2 and 16% in Cohort 3. An example of a stable strategy is the use of “one explicit thesis at the beginning” of a paper (371); a strategy “rarely” used was “a thesis statement [placed] inductively at the middle or end” (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming, “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberate reuse of strategies and whether or not “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new

capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis. (378)

 


Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in a first-year writing class at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments they examined, as well as in the end comments that appeared in almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation impacted students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed seeking feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of the writing center for drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that interviews suggested a desire of some students to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.


Webber, Jim. Reframing vs. Artful Critique of Reform. CCC, Sept. 2017. Posted 10/31/2017.

Webber, Jim. “Toward an Artful Critique of Reform: Responding to Standards, Assessment, and Machine Scoring.” College Composition and Communication 69.1 (2017): 118-45. Print.

Jim Webber analyzes the responses of composition scholars to the reform movement promoted by entities like the Collegiate Learning Assessment (CLA) and Complete College America (CCA). He notes that the standardization agenda of such groups, intended to improve the efficiency of higher education, has suffered setbacks; for example, many states have rejected the Common Core State Standards (118-19). However, in Webber’s view, these setbacks are temporary and will be followed by renewed efforts by testing and measurement agencies to impose their own criteria for student success (119).

The standardization these groups urge on higher education will, they claim, give parents and students better information about institutions and will ultimately serve as grounds for such moves as “performance funding” (119). The overall goal of such initiatives is to move students through college as quickly as possible, especially into majors (119).

Webber recognizes two prongs of composition’s response to such pressures to portray “college students and parents as consumers” (119). One thread urges “reframing” or “redirecting” the efforts of the testing industry and groups like CLA and CCA. For Webber, this viewpoint adopts a “realist style.” Scholars who espouse reframing urge that compositionists work within the current realities created by the power of the testing and standardization apparatus to “expand” the meanings of terms like “college readiness” (120), adjusting them in ways that reflect composition’s inclusive, humanistic values (122); that is, in Frank Farmer’s term, “insinuat[ing]” the professional ethos of composition and its authority into the standardization apparatus (qtd. in Webber 122).

Scholars who adopt this realist style, Webber claims, “figur[e] public policy as accommodation to the world” (141n5); moreover, in Webber’s view, they accept the description of “the way the world is” (133) put forward by CCA and others as “irreducibly competitive” and thus “[reduce] the scope of policy values to competition, efficiency, and instrumentality” (141n5).

Webber cites scholars in this vein who contend that the protests of scholars and writing professionals have been and will be effectively “ignored” by policymakers (137). More productive, in this view, is collaboration that will at least provide “a seat at the policy table,” giving professionals a chance to infuse the debate with their values (133).

Webber presents the 2011 Framework for Success in Postsecondary Writing as an example of how the reframing position “work[s] within the limits established by the dominant discourse of reform” (123). He notes that Bruce Comiskey was unable to discern any “apparent difference” between the aspirations of the Framework and those of the reform movement (125; emphasis original). For Webber, this approach sets up composition professionals as “competition” for the testing industry as the experts who can make sure students meet the reformers’ criteria for successful learning (124). Reframing in this way, Webber says, requires “message management” (123) to make sure that the response’s “strategic” potential is sustained (121).

Scholars who urge reframing invoke Cornel West’s “prophetic pragmatism” (122), which requires them to:

think genealogically about specific practices in light of the best available social theories, cultural critiques, and historiographic insights and to act politically to achieve certain moral consequences in light of effective strategies and tactics. (qtd. in Webber 122)

Webber contends that reframers interpret this directive to mean that “public critique” by compositionists “cannot deliver the consequences they desire” (123; emphasis original). Thus, a tactical approach is required.

The second thread in compositionists’ response to the reform movement is that of critique that insists that allowing the reform industry to set the terms and limits of the discussion is “to grant equivalence between our professional judgments and those of corporate-political service providers” (125-26). Webber quotes Judith Summerfield and Philip M. Anderson, who argue that “managing behavior and preparing students for vocations” does not accord with “a half-century (at the least) of enlightened classroom study and socio-psycholinguistic research” (qtd. in Webber 125).

In Webber’s view, the strands of reframing and critique have reached a “stalemate” (126). In response to the impasse, Webber explores the tradition of pragmatism, drawing on John Dewey and others. He argues that reframers call on the tenets of “melioration” and “prophetic critique” (127). “Meliorism,” according to Webber’s sources, is a linguistic process in that it works toward improving conditions through addressing the public discourse (127). In discussing West’s prophetic pragmatism as a form of “critical melioration,” Webber focuses on the “artfulness” of West’s concept (128).

Webber sees artfulness as critique “in particular contexts” in which ordinary people apply their own judgments of the consequences of a theory or policy based on the effects of these theories or policies on their lives (128-29). An artful critique invites public participation in the assessment of policies, an interaction that, according to West, functions as “antiprofessionalism,” not necessarily for the purpose of completely “eliminating or opposing all professional elites” but rather to “hold them to account” (qtd. in Webber 129).

Webber argues that proponents of reframing within composition have left out this aspect of West’s pragmatism (128). Webber’s own proposal for an artful critique involves encouraging such active participation by the publics actually affected by policies. He contends that policymakers will not be able to ignore students and parents as they have composition professionals (137).

His approach begins with “scaling down” by inviting public inquiry at a local level, then “scaling up” as the conversation begins to trigger broader responses (130). He presents the effects of student protests at the University of Missouri in 2015 as an example of how local action that challenges the power of elites can have far-reaching consequences (137-38). Compositionists, he maintains, should not abandon critique but should “expand our rhetoric of professionalism to engage the antiprofessional energy of local inquiry and resistance” (138).

As a specific application of his view, Webber provides examples of how composition professionals have enlisted public resistance to machine-scoring of student writing. As students experience “being read” by machines, he contends, they become aware of how such policies do not mesh with their concerns and experiences (137). This awareness engages them in critically “problematizing” their perspectives and assumptions (131). In the process, Webber argues, larger, more diverse audiences are encouraged to relate their own experiences, leading to “a broader public discussion of shared concerns” (131).

For Webber, drawing on the everyday judgments of ordinary people as to the value of policies put forward by professionals contrasts with the desire to align composition’s values with those of the standardization movement in hopes of influencing the latter from within. Opening the debate beyond strategic professionalism can generate a pragmatism that more nearly fits West’s prophetic ideals and that can “unsettle the inevitability of reform and potentially authorize composition’s professional perspectives” in ways that reframing the terms of the corporate initiatives cannot (135).

Stewart, Mary K. Communities of Inquiry in Technology-Mediated Activities. C&C, Sept. 2017. Posted 10/20/2017.

Stewart, Mary K. “Communities of Inquiry: A Heuristic for Designing and Assessing Interactive Learning Activities in Technology-Mediated FYC.” Computers and Composition 45 (2017): 67-84. Web. 13 Oct. 2017.

Mary K. Stewart presents a case study of a student working with peers in an online writing class to illustrate the use of the Community of Inquiry framework (CoI) in designing effective activities for interactive learning.

Stewart notes that writing-studies scholars have both praised and questioned the promise of computer-mediated learning (67-68). She cites scholarship contending that effective learning can take place in many different environments, including online environments (68). This scholarship distinguishes between “media-rich” and “media-lean” contexts. Media-rich environments include face-to-face encounters and video chats, where exchanges are immediate and are likely to include “divergent” ideas, whereas media-lean situations, like asynchronous discussion forums and email, encourage more “reflection and in-depth thinking” (68). The goal of an activity can determine which is the better choice.

Examining a student’s experiences in three different online environments with different degrees of media-richness leads Stewart to argue that it is not the environment or particular tool that determines the success or failure of an activity as a learning experience. Rather, in her view, the salient factor is “activity design” (68). She maintains that the CoI framework provides “clear steps” that instructors can follow in planning effective activities (71).

Stewart defined her object of study as “interactive learning” (69) and used a “grounded theory” methodology to analyze data in a larger study of several different course types. Interviews of instructors and students, observations, and textual analysis led to a “core category” of “outcomes of interaction” (71). “Effective” activities led students to report “constructing new knowledge as a result of interacting with peers” (72). Her coding led her to identify “instructor participation” and “rapport” as central to successful outcomes; reviewing scholarship after establishing her own grounded theory, Stewart found that the CoI framework “mapped to [her] findings” (71-72).

She reports that the framework involves three components: social presence, teaching presence, and cognitive presence. Students develop social presence as they begin to “feel real to one another” (69). Stewart distinguishes between social presence “in support of student satisfaction,” which occurs when students “feel comfortable” and “enjoy working” together, and social presence “in support of student learning,” which follows when students actually value the different perspectives a group experience offers (76).

Teaching presence refers to the structure or design that is meant to facilitate learning. In an effective CoI activity, social and teaching presence are required to support cognitive presence, which is indicated by “knowledge construction,” specifically students’ construction of “knowledge that they would not have been able to construct without interacting with peers” (70).

For this article, Stewart focused on the experiences of a bilingual Environmental Studies major, Nirmala, in an asynchronous discussion forum (ADF), a co-authored Google document, and a synchronous video webinar (72). She argues that Nirmala’s experiences reflect those of other students in the larger study (72).

For the ADF, students were asked to respond to one of three questions on intellectual property, then respond to two other students who had addressed the other questions. The prompt specifically called for raising new questions or offering different perspectives (72). Both Nirmala and Stewart judged the activity effective even though it occurred in a media-lean environment, because in sharing varied perspectives on a topic that did not have a single solution, students produced material that they were then able to integrate into the assigned paper (73):

The process of reading and responding to forum posts prompted critical thinking about the topic, and Nirmala built upon and extended the ideas expressed in the forum in her essay. . . . [She] engaged in knowledge construction as a result of interacting with her peers, which is to say she engaged in “interactive learning” or a “successful community of inquiry.” (73)

Stewart notes that this successful activity did not involve the “back-and-forth conversation” instructors often hope to encourage (74).

The co-authored paper was judged unsuccessful. Stewart contends that the presence of more immediate interaction did not result in more social presence and did not support cognitive presence (74). The instructions required two students to “work together” on the paper; according to Nirmala’s report, co-authoring became a matter of combining and editing what the students had written independently (75). Stewart writes that the prompt did not establish the need for exploration of viewpoints before the writing activity (76). As a result, Nirmala felt she could complete the assignment without input from her peer (76).

Though Nirmala suggested that the assignment might have worked better had she and her partner met face-to-face, Stewart argues from the findings that the more media-rich environment in which the students were “co-present” did not increase social presence (75). She states that instructors may tend to think that simply being together will encourage students to interact successfully when what is actually needed is more attention to the activity design. Such design, she contends, must specifically clarify why sharing perspectives is valuable and must require such exploration and reflection in the instructions (76).

Similarly, the synchronous video webinar failed to create productive social or cognitive presence. Students placed in groups and instructed to compose group responses to four questions again responded individually, merely “check[ing]” each other’s answers. Nirmala reports that the students actually “Googled the answer and, like, copy pasted” (qtd. in Stewart 77). Stewart contends that the students concentrated on answering the questions, skipping discussion and sharing of viewpoints (77).

For Stewart, these results suggest that instructors should be aware that in technology-mediated environments, students take longer to become comfortable with each other, so activity design should build in opportunities for the students to form relationships (78). Also, prompts can encourage students to share personal experiences in the process of contributing individual perspectives. Specifically, according to Stewart, activities should introduce students to issues without easy solutions and focus on why sharing perspectives on such issues is important (78).

Stewart reiterates her claim that the particular technological environment or tool in use is less important than the design of activities that support social presence for learning. Even in media-rich environments, students placed together may not effectively interact unless given guidance in how to do so. Stewart finds the CoI framework useful because it guides instructors in creating activities, for example, by determining the “cognitive goals” in order to decide how best to use teaching presence to build appropriate social presence. The framework can also function as an assessment tool to document the outcomes of activities (79). She provides a step-by-step example of CoI in use to design an activity in an ADF (79-81).

Carter and Gallegos. Assessing Celebrations of Student Writing. CS, Spring 2017. Posted 09/03/2017.

Carter, Genesea M., and Erin Penner Gallegos. “Moving Beyond the Hype: What Does the Celebration of Student Writing Do for Students?” Composition Studies 45.1 (2017): 74-98. Web. 29 Aug. 2017.

Genesea M. Carter and Erin Penner Gallegos present research on “celebrations of student writing (CSWs)” (74), arguing that while extant accounts of these events portray them as positive and effective additions to writing programs, very little research has addressed students’ own sense of the value of the CSW experience. To fill this gap, Carter and Gallegos interviewed 23 students during a CSW at the University of New Mexico (UNM) and gathered data from an anonymous online survey (84).

As defined by Carter and Gallegos, a CSW asks students to represent the writing from their coursework in a public forum through posters and art installations (77). Noting that the nature of a CSW is contingent on the particular institution at which it takes place (75, 91), the authors provide specific demographic data about UNM, where their research was conducted. The university is both a “federally designated Hispanic Serving Institution (HSI)” and “a Carnegie-designated very high research university” (75), thus combining research-level expectations with a population of “historically marginalized,” “financially very needy” students with “lower educational attainment” (76). Carter and Gallegos report on UNM’s relatively low graduation rates as compared to similar universities and the “particular challenges” faced by this academic community (76).

Among these challenges, in the authors’ view, was a “negative framing of the student population from the university community and city residents” (76). A 2009 meeting with Linda Adler-Kassner introduced graduate students Carter and Gallegos to the CSW model in place at Eastern Michigan University and led them to develop a similar program at UNM (76-77). Carter and Gallegos were intrigued by the promise of programs like the one at EMU to present a new, positive narrative about students and their abilities to the local academic and civic communities.

They recount the history of the UNM CSW as a project primarily initiated by graduate students that continues to derive from graduate-student interests and participation while also being broadly adopted by the larger university and in fact the larger community (78, 92). In their view, the CSW differs from other institutional showcases of student writing such as an undergraduate research day and a volume of essays selected by judges in that it offers a venue for “students who lack confidence in their abilities or who do not already feel that they belong to the university community” (78). They argue that changing the narrative about student writing requires a space for recognizing the strengths of such historically undervalued students.

Examining CSWs from a range of institutions in order to discover what the organizers believe these events achieve, the authors found “a few commonalities” (79). Organizers underscored their belief that the audience engagement offered by a CSW reinforced the nature of writing as “social, situational, and public,” a “transactional” experience rather than the “one-dimensional” model common in academic settings (80). Further, CSWs are seen to endorse student contributions to research across the university community and to inspire recognition of the multiple literacies that students bring to their academic careers (81). The authors’ review also reveals organizers’ beliefs that such events will broaden students’ understanding of the writing process by foregrounding how writing evolves through revision into different modes (81).

An important thread is the power of CSWs to enhance students’ “sense of belonging, both to an intellectual and a campus community” (82). Awareness that their voices are valued, according to the authors’ research, is an important factor in student persistence among marginalized populations (81). Organizers see CSWs as encouraging students to see themselves as “authors within a larger community discourse” (83).

Carter and Gallegos note a critique by Mark Mullen, who argues that CSWs can exploit student voices in that they may actually be a “celebration of the teaching of writing, a reassertion of agency by practitioners who are routinely denigrated” (qtd. in Carter and Gallegos 84). The authors find from their literature review that, indeed, few promotions of CSWs in the literature include student voices (84). They contend that their examination of student perceptions of the CSW process can further understanding of the degree to which these events meet their intended outcomes (84).

Their findings support the expectation that students will find the CSW valuable but reveal several ways in which the hopes of supporters and the responses of students are “misaligned” (90). While the CSW did contribute to students’ sense of writing as a social process, students expressed most satisfaction in being able to interact with their peers, sharing knowledge and experiencing writing in a new venue as fun (86). Few students understood how the CSW connected to the goals of their writing coursework, such as providing a deeper understanding of rhetorical situation and audience (87). While students appreciated the chance to “express” their views, the authors write that students “did not seem to relate expression to being heard or valued by the academic community” or to “an extension of agency” (88).

For the CSW to more clearly meet its potential, the authors recommend that planners at all levels focus on building metacognitive awareness of the pedagogical value of such events through classroom activities (89). Writing programs involved in CSWs, according to the authors, can develop specific outcomes beyond those for the class as a whole that define what supporters and participants hope the event will achieve (89-90). Students themselves should be involved in planning the event as well as determining its value (90), with the goal of “emphasizing to their student participants that the CSW is not just another fun activity but an opportunity to share their literacies and voices with their classmates and community” (90).

A more detailed history of the development of the UNM event illustrates how the CSW became increasingly incorporated into other university programs and how it ultimately drew participation from local artists and performers (92-93). The authors applaud this “institutionalizing” of the event because such broad interest and sponsorship mean that the CSW can continue to grow and spread knowledge of student voices to other disciplines and across the community (93).

They see “downsides” in this expansion in that the influence of different sponsors from year to year and the event’s attachment to initiatives outside of writing tend to separate the CSW from the writing courses it originated to serve. Writing programs in venues like UNM may find it harder to develop appropriate outcomes, assess results, and ensure that the CSW remains a meaningful part of a writing program’s mission (93). The authors recommend that programs hoping that a CSW will enhance actual writing instruction commit adequate resources and attention to the ongoing events. They write that, “imperatively,” student input must be part of the process in order to prevent such events from “becom[ing] merely another vehicle for asserting the value of the teaching of writing” (94; emphasis original).