College Composition Weekly: Summaries of research for college writing professionals


Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example, ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases second-year writing, of fifty-four students who consented to participate. In all, the researchers examined 359 samples, an average of 6.6 pieces of writing per student (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just ACT and WEPT scores and writing samples but also each student’s overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses on the basis of test scores alone, a placement that would have ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions by the placement team, which used multiple measures to correct test-score indications, resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in their writing samples, high school curricula, and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than forcing them through the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; the use of assessment data to design a curriculum that meets actual needs; the creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).

Author: vanderso

I'm a recently retired associate professor of English in Southern Indiana. I've been teaching writing for twenty-five years, but I feel I have much to learn about how people really learn to write. In this blog, I'll be sharing research and thoughts and hopefully gathering information from others about the process of learning to write.
