College Composition Weekly: Summaries of research for college writing professionals




Estrem et al. “Reclaiming Writing Placement.” WPA, Fall 2018. Posted 12/10/2018.

Estrem, Heidi, Dawn Shepherd, and Samantha Sturman. “Reclaiming Writing Placement.” Journal of the Council of Writing Program Administrators 42.1 (2018): 56-71. Print.

Heidi Estrem, Dawn Shepherd, and Samantha Sturman urge writing program administrators (WPAs) to deal with long-standing issues surrounding the placement of students into first-year writing courses by exploiting “fissures” (60) created by recent reform movements.

The authors note ongoing efforts by WPAs to move away from using single or even multiple test scores to determine which courses and how much “remediation” will best serve students (61). They particularly highlight “directed self-placement” (DSP) as first encouraged by Dan Royer and Roger Gilles in a 1998 article in College Composition and Communication (56). Despite efforts at individual institutions to build on DSP by using multiple measures, holistic as well as numerical, the authors write that “for most college students at most colleges and universities, test-based placement has continued” (57).

Estrem et al. locate this pressure to use test scores in the efforts of groups like Complete College America (CCA) and non-profits like the Bill and Melinda Gates Foundation, which “emphasize efficiency, reduced time to degree, and lower costs for students” (58). The authors contrast this “focus on degree attainment” with the field’s concern about “how to best capture and describe student learning” (61).

Despite these different goals, Estrem et al. recognize the problems caused by requiring students to take non-credit-bearing courses that do not address their actual learning needs (59). They urge cooperation, even if it is “uneasy,” with reform groups in order to advance improvements in the kinds of courses available to entering students (58). In their view, the impetus to reduce “remedial” coursework opens the door to advocacy for the kinds of changes writing professionals have long seen as serious solutions. Their article recounts one such effort in Idaho, where the mandate to end remediation as it is usually defined became an opening to replace test-based placement with a more effective model (60).

The authors note that CCA calls for several “game changers” in student progress to degree. Among these are the use of more “corequisite” courses, in which students can earn credit for supplemental work, and “multiple measures” (59, 61). Estrem et al. find that calls for these game changers open the door for writing professionals to introduce innovative courses and options, using evidence that they succeed in improving student performance and retention, and to redefine “multiple measures” to include evidence such as portfolio submissions (60-61).

Moreover, Estrem et al. identify three ways WPAs can respond to specific calls from reform movements while enhancing student success. First, they can create new placement processes that enable students to pass their first-year courses more consistently, thus responding to concerns about costs to students (62); second, they can provide data on increased retention, which speaks to time to degree; and finally, they can recognize a current “vacuum” in the “placement test market” (62-63). They note that ACT’s COMPASS is no longer on the market; with fewer choices, institutions may be open to new models. The authors contend that these pressures were not as exigent when directed self-placement was first promoted. The existence of such new contexts, they argue, provides important and possibly short-lived opportunities (63).

The authors note the growing movement to provide college courses to students while they are in high school (62). Despite the existence of this model for lowering the cost and time to degree, Estrem et al. argue that the first-year experience is central to student success in college regardless of students’ level when they enter, and that placing students accurately during this first college exposure can have long-lasting effects (63).

Acknowledging that individual institutions must develop tools that work in their specific contexts, Estrem et al. present “The Write Class,” their new placement tool. The Write Class is “a web application that uses an algorithm to match students with a course based on the information they provide” (64). Students are asked a set of questions, beginning with demographics. A “second phase,” similar to that in Royer and Gilles’s original model, asks for “reflection” on students’ reading and writing habits and attitudes, encouraging, among other results, student “metaawareness” about their own literacy practices (65).

The third phase provides extensive information about the three credit-bearing courses available to entering students: the regular first-year course in which most students enroll; a version of this course with an additional workshop hour with the instructor in a small group setting; or a second-semester research-based course (64). The authors note that the courses are given generic names, such as “Course A,” to encourage students to choose based on the actual course materials and their self-analysis rather than a desire to get into or dodge specific courses (65).

Finally, students are asked to take into account “the context of their upcoming semester,” including the demands they expect from family and jobs (65). With these data, the program advises students on a “primary and secondary placement,” for some including the option to bypass the research course through test scores and other data (66).
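The article describes The Write Class only at this level of detail; the specific questions, weights, and decision rules behind its algorithm are not published there. As a purely hypothetical illustration of how phased self-report data of the kind described above might be combined into a primary and secondary recommendation, consider the minimal sketch below. All field names, scales, and thresholds are invented for the example and are not the actual Write Class logic.

```python
# Hypothetical sketch of a multi-phase placement recommender, loosely modeled
# on the phases Estrem et al. describe for The Write Class. Every rule and
# threshold here is an invented placeholder, not the published algorithm.

from dataclasses import dataclass

# Generic course labels, mirroring the article's point that students see
# "Course A"-style names rather than catalog numbers.
COURSE_A = "first-year writing"                      # regular first-year course
COURSE_B = "first-year writing plus workshop hour"   # added small-group workshop
COURSE_C = "second-semester research-based course"


@dataclass
class StudentResponses:
    # Phase 2: self-reflection on reading/writing habits (hypothetical 1-5 scale)
    reading_writing_confidence: int
    # Phase 4: anticipated family/work demands for the semester (1-5, 5 = heavy)
    outside_demands: int
    # Supplemental evidence that might support bypassing into the research course
    strong_test_scores: bool = False


def recommend_placement(r: StudentResponses) -> tuple[str, str]:
    """Return a (primary, secondary) placement recommendation."""
    if r.strong_test_scores and r.reading_writing_confidence >= 4:
        # Some students may bypass into the research course, per the article.
        return COURSE_C, COURSE_A
    if r.reading_writing_confidence <= 2 or r.outside_demands >= 4:
        # Lower confidence or heavy outside demands -> extra workshop support.
        return COURSE_B, COURSE_A
    return COURSE_A, COURSE_B


if __name__ == "__main__":
    print(recommend_placement(StudentResponses(reading_writing_confidence=3,
                                               outside_demands=2)))
    # -> ('first-year writing', 'first-year writing plus workshop hour')
```

Whatever the actual implementation, the point the authors emphasize is that the recommendation draws on students’ own reflections and circumstances rather than test scores alone, and that it is advisory, pairing a primary placement with a secondary option.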

In the authors’ view, the process has a number of additional benefits that contribute to student success. Importantly, they write, the faculty are able to reach students prior to enrollment and orientation rather than find themselves forced to deal with placement issues after classes have started (66). Further, they can “control the content and the messaging that students receive” regarding the writing program and can respond to concerns across campus (67). The process makes it possible to have “meaningful conversation[s]” with students who may be concerned about their placement results; in addition, access to the data provided by the application allows the WPAs to make necessary adjustments (67-68).

Overall, the authors present a student’s encounter with their placement process as “a pedagogical moment” (66), in which the focus moves from “getting things out of the way” to “starting a conversation about college-level work and what it means to be a college student” (68). This shift, they argue, became possible through rhetorically savvy conversations that took advantage of calls for reform; by “demonstrating how [The Write Class process] aligned with this larger conversation,” the authors were able to persuade administrators to adopt the kinds of concrete changes WPAs and writing scholars have long advocated (66).



Klausman et al. TYCA White Paper on Placement Reform. TETYC, Dec. 2016. Posted 01/28/2017.

Klausman, Jeffrey, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts. “TYCA White Paper on Placement Reform.” Teaching English in the Two-Year College 44.2 (2016): 135-57. Web. 19 Jan. 2017.

Jeffrey Klausman, Christie Toth, Wendy Swyt, Brett Griffiths, Patrick Sullivan, Anthony Warnke, Amy L. Williams, Joanne Giordano, and Leslie Roberts, as members of the Two-Year College Association (TYCA) Research Committee, present a White Paper on Placement Reform. They review current scholarship on placement and present two case studies of two-year colleges that have implemented specific placement models: multiple measures to determine readiness for college-level writing and directed self-placement (DSP) (136).

The authors locate their study in a current moment characterized by a “completion agenda,” which sees as a major goal improving student progress toward graduation, with an increased focus on the role of two-year colleges (136-37). This goal has been furthered by faculty-driven initiatives such as Accelerated Learning Programs but has also been taken on by foundations and state legislatures, whose approach to writing instruction may or may not accord with scholarship on best practices (137). All such efforts to ensure student progress work toward “remov[ing] obstacles” that impede completion, “especially ‘under-placement’” in developmental courses (137).

Efforts to improve placement require alternatives to low-cost, widely available, and widely used high-stakes tests such as COMPASS. Such tests have been shown not only to be unable to measure the many factors that affect student success but also to discriminate against protected student populations (137). In fact, ACT will no longer offer COMPASS after the 2015-2016 academic year (137).

Such tests, however, remain “the most common placement process currently in use at two-year colleges” (138); such models are used more frequently at two-year institutions than at four-year ones (139). These models, the Committee reports, also often rely on “Automated Writing Evaluation (AWE) software,” or machine scoring (138). Scholarship has noted that “indirect” measures like standardized tests are poor instruments in placement because they are weak predictors of success and because they cannot be aligned to local curricula and often-diverse local populations (138). Pairing such tests with a writing sample scored with AWE limits assessment to mechanical measures and fails to communicate to students what college writing programs value (138-39).

These processes are especially troublesome at community colleges because of the diverse population at such institutions and because of the particular need at such colleges for faculty who understand the local environment to be involved in designing and assessing the placement process (139). The Committee contends further that turning placement decisions over to standardized instruments and machines diminishes the professional authority of community-college faculty (139).

The authors argue that an effective measure of college writing readiness must be based on more than one sample, perhaps a portfolio of different genres; that it must be rated by multiple readers familiar with the curriculum into which the students are to be placed; and that it must be sensitive to the features and needs of the particular student population (140). Two-year institutions may face special challenges because they may not have dedicated writing program administrators and may find themselves reliant on contingent faculty who cannot always engage in the necessary professional development (140-41).

A move to multiple measures would incorporate “‘soft skills’ such as persistence and time management” as well as “situational factors such as financial stability and life challenges” (141). Institutions, however, may resist change because of cost or because of an “institutional culture” uninformed about best practices. In such contexts, the Committee suggests incremental reform, such as considering high-school GPAs or learning-skills inventories (142).

The case study of a multiple-measures model, which examines Highline Community College in Washington state, reports that faculty were able to overcome institutional resistance by collecting data that confirmed findings from the Community College Research Center (CCRC) at Columbia University showing that placement into developmental courses impeded completion of college-level courses (142). Faculty were able to draw on high-school portfolios, GED scores, and scores on other assessment instruments without substantially increasing costs. The expense of a dedicated placement advisor was offset by measurable student success (143).

The Committee presents Directed Self-Placement (DSP), based on a 1998 article by Daniel Royer and Roger Gilles, as “a principle rather than a specific procedure or instrument”: the concept recognizes that well-informed students can make adequate educational choices (143). The authors note many benefits from DSP: increases in student agency, which encourages responsibility and motivation; better attitudes that enhance the learning environment; and especially an opportunity for students to begin to understand what college writing will entail. Further program benefits include opportunities for faculty to reflect on their curricula (144).

Though “[e]mpirical evidence” is “promising,” the authors find only two studies that specifically address DSP models in use at community colleges. These studies note the “unique considerations” confronting open-admissions institutions, such as “limited resources,” diverse student bodies, and prescriptive state mandates (145).

The case study of Mid Michigan Community College, which implemented DSP in 2002, details how the college drew on some of the many options available for a DSP model, including two different reading scores, sample assignments from the three course options, an online survey about students’ own writing and reading backgrounds, and an advisor consultation (146). Completion results improved substantially without major cost increases. The college is now addressing the effects of a growth surge as well as the need for the model to accommodate students with dual-enrollment credits (146-47).

Other possible reforms at some institutions include “first-week ‘diagnostic’ assignments”; “differentiated instruction,” which allows students with some degree of college readiness to complete the capstone project in the credit-bearing course; and various options for “challenging” placement, such as submission of portfolios (147-48). The authors caution that students who already understand the institutional culture—possibly “white, middle-class, traditional-aged students”—are the ones most likely to self-advocate through placement challenges (148).

The Committee reports that the nontraditional students and veterans in many two-year-college populations often do not score well on standardized tests and need measures that capture other factors that predict college success, such as “life experiences” (148). Similarly, the diverse two-year student population is best served by measures that recognize students’ abilities to “shuttle among a wealth of languages, linguistic resources, and modalities,” rather than tests that may well over-place students whose strength is grammar knowledge, such as some international students (149).

The Committee recommends that placement procedures should

  • be grounded in disciplinary knowledge.

  • be developed by local faculty who are supported professionally.

  • be sensitive to effects on diverse student populations.

  • be assessed and validated locally.

  • be integrated into campus-wide efforts to improve student success. (150-51)

The White Paper provides a substantial Works Cited list as a resource for placement reform.



Anderst et al. Accelerated Learning at a Community College. TETYC, Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), part of the City University of New York (CUNY) system (11), during the spring and fall semesters of 2014 (14).

In the ALP model followed at QCC, students who had “placed into remediation” simultaneously took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” in the two-course first-year curriculum (11). Both courses were taught by the same instructor, who could develop specific curriculum that incorporated program elements designed to encourage the students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), for placement and to “exit remediation,” thus receiving a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey students took in the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases, their sample was too small for results to rise to statistical significance, although final numbers did allow such evaluation (18). They also note that the students in the ALP sections whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend encouraging more open-ended responses than they received to more accurately account for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because it indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)



Giordano and Hassel. Developmental Reform and the Two-Year College. TETYC, May 2016. Posted 07/25/2016.

Giordano, Joanne Baird, and Holly Hassel. “Unpredictable Journeys: Academically At-Risk Students, Developmental Education Reform, and the Two-Year College.” Teaching English in the Two-Year College 43.4 (2016): 371-90. Web. 11 July 2016.

Joanne Baird Giordano and Holly Hassel report on a study of thirty-eight underprepared students negotiating the curriculum at a “small midwestern campus” that is part of a “statewide two-year liberal arts institution” (372). The study assessed the placement process, the support systems in place, and the efforts to “accelerate” students from developmental coursework to credit-bearing courses (374). The institution, an open-access venue, accepted 100 percent of applicants in 2014 (372).

Giordano and Hassel position their study in an ongoing conversation about how best to speed up students’ progress through college and improve graduation rates—the “college completion agenda” (371). Expressing concern that some policy decisions involved in these efforts might result from what Martha E. Casazza and Sharon L. Silverman designate as “misunderstood studies of ‘remedial’ student programs” (371), Giordano and Hassel present their study as reinforcing the importance of a robust developmental curriculum within an open-access environment and the necessity for ongoing support outside of regular classwork. They also focus on the degree to which placement procedures, even those using multiple measures, often fail to predict long-term student trajectories (371, 377).

The researchers characterize their institution as offering a “rigorous general-education curriculum” designed to facilitate student transfer to the four-year institutions within the state (372). They note that the two-year institution’s focus on access and its comprehensive placement process, which allows faculty to consider a range of factors such as high school grades, writing samples, and high-school coursework (375), mean that its developmental writing program is more likely to serve underprepared students than is the case at colleges that rely on less varied placement measures such as standardized tests (374). The thirty-eight students in the study all had test scores that would have placed them in multiple developmental sections at many institutions (374).

The institution’s goal is to reduce the amount of time such students spend in developmental curricula while supporting the transition to credit-bearing coursework (373). The writing program offers only one developmental course; after completing this course, students move to a two-course credit-bearing sequence, the second component of which fulfills the core writing requirement for four-year institutions within the state (373-74). A curriculum that features “integrated reading and writing” and a small-group “variable-credit, nondegree studio writing course” that students can take multiple times support students’ progress (373).

Examination of student work in the courses in which they were placed indicates that students were generally placed appropriately (375). Over the next two years, the researchers assessed how well the students’ written work met course outcomes and interviewed instructors about student readiness to move forward. Giordano and Hassel then collected data about the students’ progress in the program over a four-year period (375).

Noting that 74% of the students studied remained in good academic standing after their first year, Giordano and Hassel point out that test scores bore no visible relation to academic success (377). Eighteen of the thirty-eight students completed the second-semester writing course. Acknowledging that this proportion was lower than it would be for students whose test scores did not direct them into developmental classes, the authors argue that this level of success illustrates the value of the developmental coursework they undertook. Whereas policy makers often cite developmental work as an impediment to college completion, Giordano and Hassel argue that this coursework was essential in helping the underprepared students progress; they contend that what prevents many such students from moving more quickly and successfully through college is not having to complete extra coursework but instead “the gradual layering of academic and nonacademic challenges” that confronts these students (377).

The authors present a case study to argue that with ongoing support, a student whose scores predict failure can in fact succeed at college-level work (378-79). More problematic, however, are the outcomes for students who place into more than one developmental course, for example, both writing and math.

For example, only three of twenty-one students placing into more than one developmental section “completed a state system degree of any kind,” but some students in this category did earn credits during the four years of the study (380). The authors conclude from data such as these that the single developmental section of writing along with the studio course allowed the students to succeed where they would ordinarily have failed, but that much more support of different kinds is needed to help them progress into the core curriculum (381).

The authors examined the twenty students who did not complete the core requirement to understand how they “got stuck” in their progress (381). Some students repeatedly attempted the initial credit-bearing course; others avoided taking the core courses; and still others could not manage the second, required writing course (382-83). The authors offer “speculat[ion]” that second-language issues may have intervened; they also note that the students did not take the accompanying studio option and that their instructors chose a “high-stakes, single-grade essay submission” process rather than requiring a portfolio (382).

In addition, the authors contend, many students struggled with credit-bearing work in all their courses, not just writing and reading (383). Giordano and Hassel argue that more discipline-specific support is needed if students are to transition successfully to the analytical thinking, reading, and writing demanded by credit-bearing courses. They note that one successful strategy undertaken by some students involved “register[ing] in gradually increasing numbers of reading-intensive credits” (384), thus protecting their academic standing while building their skills.

Another case study of a student who successfully negotiated developmental and lower-level credit-bearing work but struggled at higher levels leads Giordano and Hassel to argue that, even though this student ultimately faced suspension, the chance to attend college and acquire credits exemplified the “tremendous growth as a reader, writer, and student” open access permits (384).

The study, the authors maintain, supports several conclusions. First, the demand from policy-making bodies that the institutions and faculty who serve underprepared students be held accountable for the outcomes of their efforts neglects the fact that these institutions and educators have “the fewest resources and voices of influence in higher education and in the policy-making process” (384). Second, they report data showing that policies that discourage students from taking advantage of developmental work so they can move through coursework more quickly result in higher failure rates (387). Third, Giordano and Hassel argue that directed self-placement is not appropriate for populations like the one served by their institution (387). Finally, they reiterate that the value of attending college cannot be measured strictly by graduation rates; the personal growth such experiences offer should be an essential component of any evaluation (387-88).



Addison, Joanne. Common Core in College Classrooms. Journal of Writing Assessment, Nov. 2015. Posted 12/03/2015.

Addison, Joanne. “Shifting the Locus of Control: Why the Common Core State Standards and Emerging Standardized Tests May Reshape College Writing Classrooms.” Journal of Writing Assessment 8.1 (2015): 1-11. Web. 20 Nov. 2015.

Joanne Addison offers a detailed account of moves by testing companies and philanthropists to extend the influence of the Common Core State Standards Initiative (CCSSI) to higher education. Addison reports that these entities are building “networks of influence” (1) that will shift agency from teachers and local institutions to corporate interests. She urges writing professionals to pay close attention to this movement and to work to retain and restore teacher control over writing instruction.

Addison writes that a number of organizations are attempting to align college writing instruction with the CCSS movement currently garnering attention in K-12 institutions. This alignment, she documents, is proceeding despite criticisms of the Common Core Standards for demanding skills that are “not developmentally appropriate,” for ignoring crucial issues like “the impact of poverty on educational opportunity,” and for the “massive increase” in investment in and reliance on standardized testing (1). But even if these challenges succeed in scaling back the standards, she contends, too many teachers, textbooks, and educational practices will have been influenced by the CCSSI for its effects to dissipate entirely (1). Control of professional development practices by corporations and specific philanthropies, in particular, will link college writing instruction to the Common Core initiative (2).

Addison connects the investment in the Common Core to the “accountability movement” (2) in which colleges are expected to demonstrate the “value added” by their offerings as students move through their curriculum (5). Of equal concern, in Addison’s view, is the increasing use of standardized test scores in college admissions and placement; she notes, for example, “640 colleges and universities” in her home state of Colorado that have “committed to participate” in the Partnership for Assessment of Readiness for College and Career (PARCC) by using standardized tests created by the organization in admissions and placement; she points to an additional 200 institutions that have agreed to use a test generated by the Smarter Balanced Assessment Consortium (SBAC) (2).

In her view, such commitments are problematic not only because they use single-measure tools rather than more comprehensive, pedagogically sound decision-making protocols but also because they result from the efforts of groups like the English Language Arts Work Group for CCSSI, the membership of which is composed of executives from testing companies, supplemented with only one “retired English professor” and “[e]xactly zero practicing teachers” (3).

Addison argues that materials generated by organizations committed to promoting the CCSSI show signs of supplanting more pedagogically sound initiatives like NCTE’s ReadWriteThink program (4). To illustrate how she believes the CCSSI has challenged more legitimate models of professional development, she discusses the relationship between CCSSI-linked coalitions and the National Writing Project.

She writes that in 2011, funds for the National Writing Project were shifted to the president’s Race to the Top (3). Some funding was subsequently restored, but grants from the Bill and Melinda Gates Foundation specifically supported National Writing Project sites that worked with an entity called the Literacy Design Collaborative (LDC) to promote the use of the Common Core Standards in assignment design and to require the use of a “jurying rubric” intended to measure the fit with the Standards in evaluating student work (National Writing Project, 2014, qtd. in Addison 4). According to Addison, “even the briefest internet search reveals a long list of school districts, nonprofits, unions, and others that advocate the LDC approach to professional development” (4). Addison contends that teachers have had little voice in developing these course-design and assessment tools and are unable, under these protocols, to refine instruction and assessment to fit local needs (4).

Addison expresses further concern about the lack of teacher input in the design, administration, and weight assigned to the standardized testing used to measure “value added” and thus hold teachers and institutions accountable for student success. A number of organizations largely funded by the Bill and Melinda Gates Foundation promote the use of “performance-based” standardized tests given to entering college students and again to seniors (5-6). One such test, the Collegiate Learning Assessment (CLA), is now used by “700 higher education institutions” (5). Addison notes that nine English professors were among the 32 college professors who worked on the development and use of this test; however, all were drawn from “CLA Performance Test Academies” designed to promote the “use of performance-based assessments in the classroom,” and the professors’ specialties were not provided (5-6).

A study conducted using a similar test, the Common Core State Standards Validation Assessment (CCSSAV), indicated that the test did provide some predictive power but that high-school GPA was a better indicator of student success in higher education (6). In all, Addison reports four different studies that similarly found that the predictor of choice was high-school GPA, which, she says, improves on the snapshot of a single moment supplied by a test, instead measuring a range of facets of student abilities and achievements across multiple contexts (6).

Addison attributes much of the movement toward CCSSI-based protocols to the rise of “advocacy philanthropy,” which shifts giving from capital improvements and research to large-scale reform movements (7). While scholars like Cassie Hall see some benefits in this shift, for example in the ability to spotlight “important problems” and “bring key actors together,” concerns, according to Addison’s reading of Hall, include

the lack of external accountability, stifling innovation (and I would add diversity) by offering large-scale, prescriptive grants, and an unprecedented level of influence over state and government policies. (7)

She further cites Hall’s concern that this shift will siphon money from “field-initiated academic research” and will engender “a growing lack of trust in higher education” that will lead to even more restrictions on teacher agency (7).

Addison’s recommendations for addressing the influx of CCSSI-based influences include aggressively questioning our own institutions’ commitments to facets of the initiative, using the “15% guideline” within which states can supplement the Standards, building competing coalitions to advocate for best practices, and engaging in public forums, even where such writing is not recognized in tenure-and-promotion decisions, to “place teachers’ professional judgment at the center of education and help establish them as leaders in assessment” (8). Such efforts, in her view, must serve the effort to identify assessment as a tool for learning rather than control (7-8).

Access this article at http://journalofwritingassessment.org/article.php?article=82



Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test scores alone, a placement that would have ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; use of assessment data to design a curriculum that meets actual needs; creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).