College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho, which used the mandate to end remediation as conventionally defined as an opportunity to replace test-based placement with a more effective model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. find three ways in which WPAs can respond to specific calls from reform movements in ways that enhance student success. First, they can move to create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s Compass is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
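The authors do not publish the matching logic itself, but the process they describe (demographic questions, guided self-reflection, generically labeled course descriptions, and questions about the upcoming semester feeding a primary and secondary recommendation) can be illustrated with a small rule-based sketch. The following Python is purely hypothetical: the fields, thresholds, and course labels are invented for illustration and are not drawn from The Write Class.

# Hypothetical sketch, not the authors' implementation: a rule-based matcher
# in the spirit of The Write Class, described as "a web application that uses
# an algorithm to match students with a course based on the information they
# provide" (64). All fields and cutoffs below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Responses:
    confidence_with_reading: int   # self-reported, 1-5 (reflection phase)
    confidence_with_writing: int   # self-reported, 1-5 (reflection phase)
    outside_hours: int             # weekly work/family demands (context phase)
    qualifying_score: bool         # meets a hypothetical cutoff for bypass

# Generic labels, echoing the authors' "Course A" convention, so students
# weigh course content rather than course numbers (65).
COURSE_A = "first-year writing"
COURSE_B = "first-year writing with additional workshop hour"
COURSE_C = "second-semester research-based writing"

def recommend(r: Responses) -> tuple[str, str]:
    """Return a hypothetical (primary, secondary) placement recommendation."""
    self_assessment = r.confidence_with_reading + r.confidence_with_writing  # 2-10

    # Strong self-assessment plus a qualifying score might support offering
    # the research course as the primary recommendation.
    if r.qualifying_score and self_assessment >= 8:
        return COURSE_C, COURSE_A

    # Lower confidence or heavy outside commitments point toward the section
    # with the small-group workshop hour.
    if self_assessment <= 5 or r.outside_hours > 30:
        return COURSE_B, COURSE_A

    return COURSE_A, COURSE_B

if __name__ == "__main__":
    print(recommend(Responses(3, 2, 35, False)))  # -> workshop section first

The point of the sketch is only that such a tool encodes the students' own reflections and circumstances, not just test scores, in the recommendation it returns.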

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).



Anderst et al. Accelerated Learning at a Community College. TETYC Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), part of the City University of New York (CUNY) system (11), during the spring and fall semesters of 2014 (14).

In the ALP model followed at QCC, students who had “placed into remediation” simultaneously took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” in the two-course first-year curriculum (11). Both courses were taught by the same instructor, who could develop specific curriculum that incorporated program elements designed to encourage the students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), for placement and to “exit remediation,” thus receiving a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey that students took during the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases, their sample was too small for results to rise to statistical significance, although final numbers did allow such evaluation (18). They also note that the students in the ALP sections whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend soliciting more open-ended responses than they received in order to account more accurately for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because the decrease indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)



Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test-scores alone, which ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; use of assessment data to design a curriculum that meets actual needs; creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).