College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Nazzal et al. Curriculum for Targeted Instruction at a Community College. TETYC, Mar. 2020. Posted 06/11/2020.

Nazzal, Jane S., Carol Booth Olson, and Huy Q. Chung. “Differences in Academic Writing across Four Levels of Community College Composition Courses.” Teaching English in the Two-Year College 47.3 (2020): 263-96. Print.

Jane S. Nazzal, Carol Booth Olson, and Huy Q. Chung present an assessment tool to help writing educators design curriculum during a shift from faculty-scored placement exams and developmental or “precollegiate” college courses (263) to what they see as common reform options (264-65, 272).

These options, they write, often include directed self-placement (DSP), while preliminary courses designed for students who might struggle with “transfer-level” courses are often replaced with two college-level courses, one with a concurrent support component for students who feel they need extra help and one without (265). At the authors’ institution, “a large urban community college in California” with an enrollment of 50,000 that is largely Hispanic and Asian, faculty-scored exams placed 15% of students into the transfer-level course; after the implementation of DSP, 73% chose the transfer course, 12% the course with support, and the remaining 15% the precollegiate courses (272).

The transition to DSP and away from precollegiate options, according to Nazzal et al., resulted from a shift away from “access” afforded by curricula intended to help underprepared students toward widespread emphasis on persistence and time to completion (263). The authors cite scholarship contending that processes that placed students according to faculty-scored assessments incorrectly placed one-third to one-half of students and disparately affected minority students; fewer than half of students placed into precollegiate courses reach the transfer-level course (264).

In the authors’ view, the shift to DSP as a solution for these problems creates its own challenges. They contend that valuable information about student writing disappears when faculty no longer participate in placement processes (264). Moreover, they question whether high-school grades are a reliable basis for students’ placement decisions, arguing that high-school curricula often include little writing (265). They cite “burden-shifting” when the responsibility for making good choices is passed to students who may have incomplete information and little experience with college work (266). Noting as well that lower-income students may opt for the unsupported transfer course because of the time pressure of their work and home lives, the authors see a need for research on how to address the specific situations of students who opt out of support they may need (266-67).

The study implemented by Nazzal et al. attempts to identify these specific areas that affect student success in college writing in order to facilitate “explicit teaching” and “targeted instruction” (267). They believe that their process identifies features of successful writing that are largely missing from the work of inexperienced writers but that can be taught (268).

The authors review cognitive research on the differences between experienced and novice writers, identifying areas like “Writing Objectives,” “Revision,” and “Sense of Audience” (269-70). They present “[f]oundational [r]esearch” that compares the “writer-based prose” of inexpert writers with the “reader-based prose” of experts (271), as well as the whole-essay conceptualization of successful writers versus the piecemeal approach of novices, among other differentiating features (269).

The study was implemented during the first two weeks of class over two semesters, with eight participating faculty teaching thirteen sections. Two hundred twenty-five students from the three precollegiate levels and the single transfer-level course completed the tasks. The study essays were similar to the standard college placement essays taken by most of the students in that they were timed responses to prompts, but for the study, students were asked to read two pieces and “interpret, and synthesize” them in their responses (272-73). One piece was a biographical excerpt on either Harriet Tubman or the war hero Louie Zamperini, and the other a “shorter, nonfiction article outlining particular character qualities or traits,” one discussing leadership and the other resilience (274). The prompts asked students to choose a single trait exhibited by the subject that most contributed to his or her success (274).

In the first of two 45-minute sessions, teachers read the pieces aloud while students followed along, then gave preliminary guidance using a graphic organizer. In the second session, students wrote their essays. The essays were rated by experienced writing instructors trained in scoring, using criteria for “high-school writing competency” based on principles established by mainstream composition assessment models (273-74).

Using “several passes through the data,” the lead researcher examined a subset of 76 papers that covered the full range of scores in order to identify features that were “compared in frequency across levels.” Differences in the frequency of these features were analyzed for statistical significance across the four levels (275). A subsample of 18 high-scoring papers was subsequently analyzed for “distinguishing elements . . . that were not present in lower-scoring papers,” including some features that had not been previously identified (275).
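The article reports the outcome of this analysis rather than the statistical procedure itself. As a minimal sketch of how a feature-frequency comparison across course levels might be run, assuming a simple presence/absence contingency table and a chi-square test (the test choice and all counts below are invented for illustration and are not drawn from the study):

```python
# Illustrative sketch only: hypothetical counts, not the study's data.
# Compares how often a feature (e.g., a clear claim) appears in papers
# at each of four course levels and tests whether the differences are
# statistically significant.
from scipy.stats import chi2_contingency

# Rows: feature present / feature absent.
# Columns: the four course levels (three precollegiate, one transfer-level).
observed = [
    [10, 14, 18, 40],  # papers at each level that include the feature
    [25, 22, 20, 10],  # papers at each level that do not
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
if p_value < 0.05:
    print("Feature frequency differs significantly across course levels.")
else:
    print("No significant difference detected across course levels.")
```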

Nine features were compared across the four levels; the authors provide examples of presence versus absence of these features (276-79). Three features differed significantly in their frequency in the transfer-level course versus the precollegiate courses: including a clear claim, responding to the specific directions of the prompt, and referring to the texts (279).

Nazzal et al. also discovered that a quarter of the students placed in the transfer-level course failed to refer to the texts, and that only half the students in that course earned passing scores, indicating that many had not incorporated one or more of the important features. They concluded that students at all levels would benefit from a curriculum targeting these moves (281).

Writing that only 9% of the papers scored in the “high” range of 9-12 points, Nazzal et al. present an annotated example of a paper that includes components that “went above and beyond the features that were listed” (281). Four distinctive features of these papers were

(1) a clear claim that is threaded throughout the paper; (2) a claim that is supported by relevant evidence and substantiated with commentary that discusses the significance of the evidence; (3) a conclusion that ties back to the introduction; and (4) a response to all elements of the prompt. (282)

Providing appendices to document their process, Nazzal et al. offer recommendations for specific “writing moves that establish communicative clarity in an academic context” (285). They contend that it is possible to identify and teach the moves necessary for students to succeed in college writing. In their view, their identification of differences in the writing of students entering college with different levels of proficiency suggests specific candidates for the kind of targeted instruction that can help all students succeed.


Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho to use the mandate to end remediation as it is usually defined and replace it with a more effective placement model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements in ways that enhance student success. First, they can move to create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
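Estrem et al. describe The Write Class only at this level of detail. A minimal sketch of how a questionnaire-driven placement recommendation of this general shape might be organized (every field, threshold, decision rule, and course label below is hypothetical, not the authors’ algorithm) could look like this:

```python
# Hypothetical sketch of a questionnaire-driven placement recommendation,
# loosely modeled on the phases described for The Write Class; the fields,
# thresholds, and rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class StudentResponses:
    confident_reader: bool      # phase two: self-reported reading habits and attitudes
    writes_regularly: bool      # phase two: self-reported writing habits
    weekly_work_hours: int      # final phase: outside demands on the upcoming semester
    has_qualifying_score: bool  # optional scores or data that may allow bypassing a course

def recommend_placement(s: StudentResponses) -> tuple[str, str]:
    """Return a hypothetical (primary, secondary) course recommendation."""
    # Strong self-reported literacy practices plus qualifying data might
    # point toward the second-semester research-based course.
    if s.confident_reader and s.writes_regularly and s.has_qualifying_score:
        return ("Course C (research-based course)", "Course A (first-year writing)")
    # Less confidence with reading and writing, or heavy outside commitments,
    # might suggest the course with the added small-group workshop hour.
    if not (s.confident_reader and s.writes_regularly) or s.weekly_work_hours > 30:
        return ("Course B (first-year writing with workshop hour)",
                "Course A (first-year writing)")
    return ("Course A (first-year writing)",
            "Course B (first-year writing with workshop hour)")

print(recommend_placement(StudentResponses(True, True, 10, False)))
```

In the application the authors describe, any such recommendation would accompany, not replace, the reflective and informational phases they emphasize, and students still receive both a primary and a secondary placement.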

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).


Klausman et al. TYCA White Paper on Placement Reform. TETYC, Dec. 2016. Posted 01/28/2017.

Klausman, Jeffrey, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts. “TYCA White Paper on Placement Reform.” Teaching English in the Two-Year College 44.2 (2016): 135-57. Web. 19 Jan. 2017.

Jeffrey Klausman, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts, as members of the Two-Year College Association (TYCA) Research Committee, present a White Paper on Placement Reform. They review current scholarship on placement and present two case studies of two-year colleges that have implemented specific placement models: multiple measures to determine readiness for college-level writing and directed self-placement (DSP) (136).

The authors locate their study in a current moment characterized by a “completion agenda,” which sees as a major goal improving student progress toward graduation, with an increased focus on the role of two-year colleges (136-37). This goal has been furthered by faculty-driven initiatives such as Accelerated Learning Programs but has also been taken on by foundations and state legislatures, whose approach to writing instruction may or may not accord with scholarship on best practices (137). All such efforts to ensure student progress work toward “remov[ing] obstacles” that impede completion, “especially ‘under-placement’” in developmental courses (137).

Efforts to improve placement require alternatives to low-cost, widely available, and widely used high-stakes tests, such as COMPASS. Such tests have not only been shown to be unable to measure the many factors that affect student success; they have also been shown to discriminate against protected student populations (137). In fact, ACT will no longer offer COMPASS after the 2015-2016 academic year (137).

Such tests, however, remain “the most common placement process currently in use at two-year colleges” (138); such models are used more frequently at two-year institutions than at four-year ones (139). These models, the Committee reports, also often rely on “Automated Writing Evaluation (AWE) software,” or machine scoring (138). Scholarship has noted that “indirect” measures like standardized tests are poor instruments in placement because they are weak predictors of success and because they cannot be aligned to local curricula and often-diverse local populations (138). Pairing such tests with a writing sample scored with AWE limits assessment to mechanical measures and fails to communicate to students what college writing programs value (138-39).

These processes are especially troublesome at community colleges because of the diverse population at such institutions and because of the particular need at such colleges for faculty who understand the local environment to be involved in designing and assessing the placement process (139). The Committee contends further that turning placement decisions over to standardized instruments and machines diminishes the professional authority of community-college faculty (139).

The authors argue that an effective measure of college writing readiness must be based on more than one sample, perhaps a portfolio of different genres; that it must be rated by multiple readers familiar with the curriculum into which the students are to be placed; and that it must be sensitive to the features and needs of the particular student population (140). Two-year institutions may face special challenges because they may not have dedicated writing program administrators and may find themselves reliant on contingent faculty who cannot always engage in the necessary professional development (140-41).

A move to multiple measures would incorporate “‘soft skills’ such as persistence and time management” as well as “situational factors such as financial stability and life challenges” (141). Institutions, however, may resist change because of cost or because of an “institutional culture” uninformed about best practices. In such contexts, the Committee suggests incremental reform, such as considering high-school GPAs or learning-skills inventories (142).

The case study of a multiple-measures model, which examines Highline Community College in Washington state, reports that faculty were able to overcome institutional resistance by collecting data that confirmed findings from the Community College Research Center (CCRC) at Columbia University showing that placement into developmental courses impeded completion of college-level courses (142). Faculty were able to draw on high-school portfolios, GED scores, and scores on other assessment instruments without substantially increasing costs. The expense of a dedicated placement advisor was offset by measurable student success (143).

The Committee presents Directed Self-Placement (DSP), based on a 1998 article by Daniel Royer and Roger Gilles, as “a principle rather than a specific procedure or instrument”: the concept recognizes that well-informed students can make adequate educational choices (143). The authors note many benefits from DSP: increases in student agency, which encourages responsibility and motivation; better attitudes that enhance the learning environment; and especially an opportunity for students to begin to understand what college writing will entail. Further program benefits include opportunities for faculty to reflect on their curricula (144).

Though “[e]mpirical evidence” is “promising,” the authors find only two studies that specifically address DSP models in use at community colleges. These studies note the “unique considerations” confronting open-admissions institutions, such as “limited resources,” diverse student bodies, and prescriptive state mandates (145).

The case study of Mid Michigan Community College, which implemented DSP in 2002, details how the college drew on some of the many options available for a DSP model, including two different reading scores, sample assignments from the three course options, an online survey about students’ own writing and reading backgrounds, and an advisor consultation (146). Completion results improved substantially without major cost increases. The college is now addressing the effects of a growth surge as well as the need for the model to accommodate students with dual-enrollment credits (146-47).

Other possible reforms at some institutions include “first-week ‘diagnostic’ assignments”; “differentiated instruction,” which allows students with some degree of college readiness to complete the capstone project in the credit-bearing course; and various options for “challenging” placement, such as submission of portfolios (147-48). The authors caution that students who already understand the institutional culture—possibly “white, middle-class, traditional-aged students”—are the ones most likely to self-advocate through placement challenges (148).

The Committee reports that the nontraditional students and veterans in many two-year-college populations often do not score well on standardized tests and need measures that capture other factors that predict college success, such as “life experiences” (148). Similarly, the diverse two-year student population is best served by measures that recognize students’ abilities to “shuttle among a wealth of languages, linguistic resources, and modalities,” rather than tests that may well over-place students whose strength is grammar knowledge, such as some international students (149).

The Committee recommends that placement procedures should

  • be grounded in disciplinary knowledge.

  • be developed by local faculty who are supported professionally.

  • be sensitive to effects on diverse student populations.

  • be assessed and validated locally.

  • be integrated into campus-wide efforts to improve student success. (150-51)

The White Paper provides a substantial Works Cited list as a resource for placement reform.