Klausman, Jeffrey, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts. “TYCA White Paper on Placement Reform.” Teaching English in the Two-Year College 44.2 (2016): 135-57. Web. 19 Jan. 2017.
Jeffrey Klausman, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts, as members of the Two-Year College Association (TYCA) Research Committee, present a White Paper on Placement Reform. They review current scholarship on placement and present case studies of two two-year colleges that have implemented specific placement models: the use of multiple measures to determine readiness for college-level writing and directed self-placement (DSP) (136).
The authors locate their study in a current moment characterized by a “completion agenda,” which takes improving student progress toward graduation as a central goal, with an increased focus on the role of two-year colleges (136-37). This goal has been furthered by faculty-driven initiatives such as Accelerated Learning Programs but has also been taken on by foundations and state legislatures, whose approach to writing instruction may or may not accord with scholarship on best practices (137). All such efforts to ensure student progress work toward “remov[ing] obstacles” that impede completion, “especially ‘under-placement’” in developmental courses (137).
Efforts to improve placement require alternatives to low-cost, widely available, and widely used high-stakes tests, such as COMPASS. Such tests not only fail to measure the many factors that affect student success; they have also been shown to discriminate against protected student populations (137). In fact, ACT will no longer use COMPASS after the 2015-2016 academic year (137).
These tests, however, remain “the most common placement process currently in use at two-year colleges” (138) and are relied on more heavily at two-year institutions than at four-year ones (139). Such models, the Committee reports, also often incorporate “Automated Writing Evaluation (AWE) software,” or machine scoring (138). Scholarship has noted that “indirect” measures like standardized tests are poor instruments for placement because they are weak predictors of success and because they cannot be aligned with local curricula and often-diverse local populations (138). Pairing such tests with a writing sample scored by AWE limits assessment to mechanical measures and fails to communicate to students what college writing programs value (138-39).
These processes are especially troublesome at community colleges because of the diverse population at such institutions and because of the particular need at such colleges for faculty who understand the local environment to be involved in designing and assessing the placement process (139). The Committee contends further that turning placement decisions over to standardized instruments and machines diminishes the professional authority of community-college faculty (139).
The authors argue that an effective measure of college writing readiness must be based on more than one sample, perhaps a portfolio of different genres; that it must be rated by multiple readers familiar with the curriculum into which the students are to be placed; and that it must be sensitive to the features and needs of the particular student population (140). Two-year institutions may face special challenges because they may not have dedicated writing program administrators and may find themselves reliant on contingent faculty who cannot always engage in the necessary professional development (140-41).
A move to multiple measures would incorporate “‘soft skills’ such as persistence and time management” as well as “situational factors such as financial stability and life challenges” (141). Institutions, however, may resist change because of cost or because of an “institutional culture” uninformed about best practices. In such contexts, the Committee suggests incremental reform, such as considering high-school GPAs or learning-skills inventories (142).
The case study of a multiple-measures model, which examines Highline Community College in Washington state, reports that faculty were able to overcome institutional resistance by collecting data that confirmed findings from the Community College Research Center (CCRC) at Columbia University showing that placement into developmental courses impeded completion of college-level courses (142). Faculty were able to draw on high-school portfolios, GED scores, and scores on other assessment instruments without substantially increasing costs. The expense of a dedicated placement advisor was offset by measurable student success (143).
The Committee presents Directed Self-Placement (DSP), based on a 1998 article by Daniel Royer and Roger Gilles, as “a principle rather than a specific procedure or instrument”: the concept recognizes that well-informed students can make adequate educational choices (143). The authors note many benefits from DSP: increases in student agency, which encourages responsibility and motivation; better attitudes that enhance the learning environment; and especially an opportunity for students to begin to understand what college writing will entail. Further program benefits include opportunities for faculty to reflect on their curricula (144).
Though “[e]mpirical evidence” is “promising,” the authors find only two studies that specifically address DSP models in use at community colleges. These studies note the “unique considerations” confronting open-admissions institutions, such as “limited resources,” diverse student bodies, and prescriptive state mandates (145).
The case study of Mid Michigan Community College, which implemented DSP in 2002, details how the college drew on some of the many options available for a DSP model, including two different reading scores, sample assignments from the three course options, an online survey about students’ own writing and reading backgrounds, and an advisor consultation (146). Completion results improved substantially without major cost increases. The college is now addressing the effects of a growth surge as well as the need for the model to accommodate students with dual-enrollment credits (146-47).
Other possible reforms at some institutions include “first-week ‘diagnostic’ assignments”; “differentiated instruction,” which allows students with some degree of college readiness to complete the capstone project in the credit-bearing course; and various options for “challenging” placement, such as submission of portfolios (147-48). The authors caution that students who already understand the institutional culture—possibly “white, middle-class, traditional-aged students”—are the ones most likely to self-advocate through placement challenges (148).
The Committee reports that the nontraditional students and veterans in many two-year-college populations often do not score well on standardized tests and need measures that capture other factors that predict college success, such as “life experiences” (148). Similarly, the diverse two-year student population is best served by measures that recognize students’ abilities to “shuttle among a wealth of languages, linguistic resources, and modalities,” rather than tests that may well over-place students whose strength is grammar knowledge, such as some international students (149).
The Committee recommends that placement procedures should
be grounded in disciplinary knowledge.
be developed by local faculty who are supported professionally.
be sensitive to effects on diverse student populations.
be assessed and validated locally.
be integrated into campus-wide efforts to improve student success. (150-51)
The White Paper provides a substantial Works Cited list as a resource for placement reform.