College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Nazzal et al. Curriculum for Targeted Instruction at a Community College. TETYC, Mar. 2020. Posted 06/11/2020.

Nazzal, Jane S., Carol Booth Olson, and Huy Q. Chung. “Differences in Academic Writing across Four Levels of Community College Composition Courses.” Teaching English in the Two-Year College 47.3 (2020): 263-96. Print.

Jane S. Nazzal, Carol Booth Olson, and Huy Q. Chung present an assessment tool to help writing educators design curriculum during a shift from faculty-scored placement exams and developmental or “precollegiate” college courses (263) to what they see as common reform options (264-65, 272).

These options, they write, often include directed self-placement (DSP), while preliminary courses designed for students who might struggle with “transfer-level” courses are often replaced with two college-level courses, one with a concurrent support component for students who feel they need extra help, and one without (265). At the authors’ institution, “a large urban community college in California” with a largely Hispanic and Asian enrollment of 50,000, faculty-scored exams placed 15% of students into the transfer-level course; after the implementation of DSP, 73% chose the transfer course, 12% the course with support, and the remaining 15% the precollegiate courses (272).

The transition to DSP and away from precollegiate options, according to Nazzal et al., resulted from a shift away from “access” afforded by curricula intended to help underprepared students toward widespread emphasis on persistence and time to completion (263). The authors cite scholarship contending that processes that placed students according to faculty-scored assessments incorrectly placed one-third to one-half of students and disparately affected minority students; fewer than half of students placed into precollegiate courses reach the transfer-level course (264).

In the authors’ view, the shift to DSP as a solution for these problems creates its own challenges. They contend that valuable information about student writing disappears when faculty no longer participate in placement processes (264). Moreover, they question the reliability of high-school grades for student decisions, arguing that high school curriculum is often short on writing (265). They cite “burden-shifting” when the responsibility for making good choices is passed to students who may have incomplete information and little experience with college work (266). Noting as well that lower income students may opt for the unsupported transfer course because of the time pressure of their work and home lives, the authors see a need for research on how to address the specific situations of students who opt out of support they may need (266-67).

The study implemented by Nazzal et al. attempts to identify these specific areas that affect student success in college writing in order to facilitate “explicit teaching” and “targeted instruction” (267). They believe that their process identifies features of successful writing that are largely missing from the work of inexperienced writers but that can be taught (268).

The authors review cognitive research on the differences between experienced and novice writers, identifying areas like “Writing Objectives,” “Revision,” and “Sense of Audience” (269-70). They present “[f]oundational [r]esearch” that compares the “writer-based prose” of inexpert writers with the “reader-based prose” of experts (271), as well as the whole-essay conceptualization of successful writers versus the piecemeal approach of novices, among other differentiating features (269).

The study was implemented during the first two weeks of class over two semesters, with eight participating faculty teaching thirteen sections. Two hundred twenty-five students from three precollegiate levels and the single transfer-level course completed the tasks. The study essays were similar to the standard college placement essays taken by most of the students in that they were timed responses to prompts, but for the study, students were asked to read two pieces and “interpret, and synthesize” them in their responses (272-73). One piece was a biographical excerpt (on Harriet Tubman or on war hero Louie Zamperini) and the other a “shorter, nonfiction article outlining particular character qualities or traits,” one discussing leadership and the other resilience (274). The prompts asked students to choose the single trait exhibited by the subject that most contributed to his or her success (274).

In the first of two 45-minute sessions, teachers read the pieces aloud while students followed along, then gave preliminary guidance using a graphical organizer. In the second session, students wrote their essays. The essays were rated by experienced writing instructors trained in scoring, using criteria for “high-school writing competency” based on principles established by mainstream composition assessment models (273-74).

Using “several passes through the data,” the lead researcher examined a subset of 76 papers that covered the full range of scores in order to identify features that were “compared in frequency across levels.” Differences in the frequency of these features were analyzed for statistical significance across the four levels (275). A subsample of 18 high-scoring papers was subsequently analyzed for “distinguishing elements . . . that were not present in lower-scoring papers,” including some features that had not been previously identified (275).

Nine features were compared across the four levels; the authors provide examples of presence versus absence of these features (276-79). Three features differed significantly in their frequency in the transfer-level course versus the precollegiate courses: including a clear claim, responding to the specific directions of the prompt, and referring to the texts (279).

Nazzal et al. also discovered that a quarter of the students placed in the transfer-level course failed to refer to the text, and that only half the students in that course earned passing scores, indicating that many had not incorporated one or more of the important features. They concluded that students at all levels would benefit from a curriculum targeting these moves (281).

Writing that only 9% of the papers scored in the “high” range of 9-12 points, Nazzal et al. present an annotated example of a paper that includes components that “went above and beyond the features that were listed” (281). Four distinctive features of these papers were

(1) a clear claim that is threaded throughout the paper; (2) a claim that is supported by relevant evidence and substantiated with commentary that discusses the significance of the evidence; (3) a conclusion that ties back to the introduction; and (4) a response to all elements of the prompt. (282)

Providing appendices to document their process, Nazzal et al. offer recommendations for specific “writing moves that establish communicative clarity in an academic context” (285). They contend that it is possible to identify and teach the moves necessary for students to succeed in college writing. In their view, their identification of differences in the writing of students entering college with different levels of proficiency suggests specific candidates for the kind of targeted instruction that can help all students succeed.



Bunch. “Metagenres” as an Analytical Tool at Two-Year Colleges. TETYC, Dec. 2019. Posted 02/24/2020.

Bunch, George C. “Preparing the ‘New Mainstream’ for College and Careers: Academic and Professional Metagenres in Community Colleges.” Teaching English in the Two-Year College 47.2 (2019): 168-94. Print.

George C. Bunch, describing himself as a “relative ‘outsider’” who has been studying English learners and the “policies and practices” affecting their experiences as they enter and move on from community colleges (190n1), writes about the need for frameworks that can guide curricular choices for the “New Mainstream,” the students with diverse backgrounds and varied educational preparation who populate community colleges (169). He suggests attention to “metagenres,” a concept advanced by Michael Carter (171) as an “analytical tool” that can provide insights into the practices that will most benefit these students (170).

Bunch contextualizes his exploration of metagenres by reporting pressure, some from policymakers, to move community-college students more quickly through layers of developmental and English-as-second-language (ESL) coursework. Such acceleration, Bunch suggests, is meant to allow students to move faster into college-level or disciplinary coursework leading to transfer to four-year colleges or to career paths (168).

Bunch reports a study of ten California community colleges he and his team published in 2011. The study revealed contrasting orientations in approaches to developmental writing students. One endorses a skill-based curriculum in which students acquire “the basics” to function as “building blocks” for later more advanced coursework (172). The other promotes curriculum leading to “academic pathways” that encourage “opportunities for language and literacy development and support in the context of students’ actual progression toward academic and professional goals” (172). Bunch contends that in neither case did his team find adequate discussions of “the language and literacy demands of academic work beyond ESL, developmental English, and college-level composition courses” (173; emphasis original).

Bunch writes that scholarship on the role of writing instruction as students prepare for specific professional goals follows two divergent trends. One approach assumes that literacy instruction should promote a universal set of “generalist” competencies and that writing teachers’ “professional qualifications and experience” make them best qualified to teach these practices (173). Bunch points to the “Framework for Success in Postsecondary Writing” developed by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project, as well as work by Kathleen Blake Yancey, as exemplifying this approach (173-74).

At the same time, he notes, the later “WPA Outcomes Statement” illustrates a focus on the specific rhetorical demands of the disciplines students are likely to take up beyond English, asking, he writes, for “guidance” from disciplinary faculty and hoping for “share[d] responsibility” across campuses as students negotiate more targeted coursework (174). Bunch expresses concern, however, that faculty in the disciplines have “rarely reflected on those [literacy practices] explicitly” and tend to assume that students should master language use prior to entering their fields (174).

Bunch suggests that the concept of metagenres can supply analysis that affords a “grain size” between “macro approaches” that posit a single set of criteria for all writing regardless of its purpose and audience, and a “micro-level” approach that attempts to parse the complex nuances of the many different career options community-college students might pursue (175).

To establish the concept, Carter examined student outcomes at his four-year institution. Defining metagenres as “ways of doing and writing by which individual linguistic acts on the microlevel constitute social formations on the macrolevel” (qtd. in Bunch 176), Carter grouped the courses he studied under four headings:

  • Problem-Solving, most apparent in fields like economics, animal science, business management, and math
  • Empirical Inquiry, which he located in natural and social sciences
  • Research from Sources, visible in the humanities, for example history
  • Performance, notably in the fine arts but also in writing coursework (176)

Bunch notes that in some cases, the expected definitional boundaries required negotiation: e.g., psychology, though possibly an empirical discipline, fit more closely under problem-solving in the particular program Carter analyzed (176-77).

Bunch offers potential applications at the levels of ESL/developmental/composition coursework, “[w]riting across and within the disciplines,” “[c]ollege-level coursework in other disciplines,” and “[i]nstitution-wide reform” (177-79). For example, writing students might use the metagenre concept to examine and classify the writing they do in their other courses (178), or faculty might open conversations about how students might be able to experience discipline-specific work even while developing their language skills (179). Institutions might reconsider what Thomas Bailey et al. call the “cafeteria model” of course selection and move toward “guided pathways” that define coherent learning goals tied to students’ actual intentions (179).

Bunch and his group considered coursework in nine programs at a “small community college in the San Francisco Bay Area” that is designated a Hispanic-Serving Institution (180). In selecting programs, he looked for a range across both traditional academic areas and career-oriented paths, as well as for coursework in which minority and underprepared or minority-language students often enrolled (180-81). Primary data came from course descriptions at both class- and program-levels, but Bunch also drew on conversations with members of the community-college community (180).

He writes that “the notion of metagenres” was “useful for comparing and contrasting the ‘ways of doing’ associated with academic and professional programs” (181). He writes that history, fashion design, and earth science (meteorology and geology) could be classified as “research from sources,” “performance,” and “empirical inquiry,” respectively (182-83). Other courses were more complex in their assignments and outcomes, with allied health exhibiting both problem-solving and empirical inquiry and early childhood education combining performance and problem-solving (183-86).

Bunch states that applying the metagenre concept is limited by the quality of information available as well as the likelihood that it cannot subsume all subdisciplines, and he suggests more research, including classroom observation as well as examination of actual student writing (186). He cites other examinations of genre as a means of situating student learning, acknowledging the danger of too narrow a focus on particular genres at the expense of attention to the practices of the “individuals who use them” (187). However, in his view, the broader analytical potential of the metagenre frame encourages conversations among faculty who may not have explicitly considered the nuances of their fields’ literacy demands, and it promotes attention to writing as part of students’ progression into specific academic and career paths rather than as an isolated early activity (174). He posits that, rather than trying to detail the demands of any given genre as students enter the college environment, institutions might focus on helping students understand and apply the “concept of metagenre” as a way of making sense of the rhetorical situations they might enter (189; emphasis original).

Ultimately, in his view, the concept can aid in

providing more specific guidance than afforded by the kinds of general academic literacy competencies often assigned to the composition profession, yet remaining broader than a focus on the individual oral and written genres of every conceivable subdiscipline and subfield. (189)



Jensen and Ely. An “Externship” for Teaching at Two-Year Colleges. TETYC, Mar. 2017. Posted 04/06/2017.

Jensen, Darin, and Susan Ely. “A Partnership Teaching Externship Program: A Model That Makes Do.” Teaching English in the Two-Year College 44.3 (2017): 247-63. Web. 26 Mar. 2017.

Darin Jensen and Susan Ely describe a program to address the dearth of writing instructors prepared to meet the needs of community-college students. This program, an “externship,” was developed by the authors as an arrangement between Metropolitan Community College in Omaha, Nebraska (MCC), and the University of Nebraska at Omaha (UNO) (247).

The authors write that as full-time faculty at MCC, they were expected to teach developmental writing but that neither had training in either basic-writing instruction or in working with community-college populations (247). When Ely became coordinator of basic writing, she found that while she could hire instructors with knowledge of first-year writing, the pool of instructors adequately prepared to teach in the particular context of community colleges “did not exist” (248).

This dearth was especially concerning because, according to a 2015 Fact Sheet from the American Association of Community Colleges, 46% of entering students attend community colleges, while a 2013 report from the National Conference of State Legislatures notes that more than 50% of these students enroll in remedial coursework (250). Community colleges also serve the “largest portion” of minority, first-generation, and low-income students (250-51).

Jensen and Ely attribute much of this lack of preparation for teaching developmental writing to the nature of graduate training; they quote a 2014 report from the Modern Language Association that characterizes graduate education as privileging the “‘narrow replication’ of scholars” at the expense, in the authors’ words, of “more substantive training in teaching” (249). Such a disconnect, the authors contend, disadvantages both the undergraduate students who need instructors versed in basic writing and the graduating literacy professionals who lack the preparation for teaching that will ensure them full-time employment (248). They quote Ellen Andrews Knodt to note that the emphasis on teaching needed to serve community-college students suffers “almost by definition” from an “inferior status” (qtd. in Jensen and Ely 249).

Jensen and Ely’s research documents a lack of attention to teacher preparation even among resources dedicated to community colleges and basic writing. Holly Hassel’s 2013 examination of Teaching English in the Two-Year College from 2001 to 2012 found only “8 of 239 articles” that addressed teacher preparation (249). In 2006, Barbara Gleason “found fewer than twenty graduate courses in teaching basic writing across the country” (250). The authors found only one issue of TETYC, in March 2001, dealing with teacher preparation, and Gleason found only two issues of the Journal of Basic Writing, from 1981 and 1984, that focused primarily on professional development for teaching this student population (250).

Given these findings and their own experiences, Jensen and Ely designed a program that would be “activist in nature” (248), committed to the idea, drawn from Patrick Sullivan, that community-college teaching participates in “the noble work of democratizing American higher education” (249).

Jensen and Ely chose Gregory Cowan’s 1971 term “externship” over “apprenticeship” because of the latter’s “problematic hierarchical nature” (251). They abandoned a preliminary internship model because the graduate students were “not really interns, but were student teachers” and did not produce traditional papers (251). Subsequent iterations were structured as independent studies under Dr. Tammie Kennedy at UNO (251).

The authors explain that neither institution fully supported the project, at least partly, they believe, because the “low value” of community-college teaching makes it “a hard sell” (252). Dr. Kennedy earned no compensation and had no clear understanding of how the work counted in her career advancement (251-52). The authors received no reassigned time and only a $500 stipend. They emphasize that these conditions “demonstrate the difficult realities” of the kind of change they hoped to encourage (252).

Students in the program committed to eighty hours of work during a spring semester, including readings, partnering on syllabus and course design, student-teaching in every community-college course meeting, participating in planning and reflections before and after the classes, and attending a collaborative grading session (252). The externship went far beyond what the authors consider typical practica for teaching assistants; it more nearly resembled the K-12 preservice model, “provid[ing] guided practice and side-by-side mentoring for the novice teacher,” as well as extensive exposure to theoretical work in serving community-college populations (252). The graduate students developed a teaching portfolio, a teaching philosophy for the community-college environment, and a revised CV (251).

The authors share their reading lists, beginning with Mike Rose’s Lives on the Boundary and Burton R. Clark’s “The ‘Cooling-Out’ Function in Higher Education,” which they value for its “counterpoint to the promise of developmental education in Rose’s books” (252). Works by Ilona Leki, Dana Ferris, and Ann Johns added insight into ESL students, while Adrienne Rich’s “Teaching Language in Open Admissions” spoke to the needs of first-generation students (253). The authors drew from Susan Naomi Bernstein’s Teaching Developmental Writing in the first year; readings on the politics of remediation came from Mary Soliday and Patrick Finn (253).

The program emphasized course design beyond the bare introduction offered in the graduate practicum. Themed courses using “an integrated reading and writing model” involved “vocabulary acquisition, close reading, summary, explicit instruction, and discussion” (254). Jensen and Ely stress the importance of “writ[ing] with our students” and choosing texts, often narratives rather than non-fiction, based on the need to engage their particular population (255).

Another important component was the shared grading process that allowed both the authors and the graduate students to discuss and reflect on the outcomes and priorities for community-college education (255). The authors “eschew[ed] skill and drill pedagogy,” focusing on “grammar in the context of writing increasingly complex summaries and responses” (255). Though they state that the time commitment in such sessions makes them impractical “on a regular basis,” they value them as “an intense relational experience” (255).

Throughout, the authors emphasize that working with the graduate students to refine pedagogy for the community college allowed them to reflect on and develop their own theoretical understanding and teaching processes (254, 255).

The graduate students participated in interviews in which they articulated a positive response to the program (256). The authors report that while the four students in their first two years constitute too small a sample for generalization, the program contributed to success in finding full-time employment (257).

Jensen and Ely conclude that the current structure of higher education and the low regard for teaching make it unlikely that programs like theirs will be easy to establish and maintain. Yet, they note, the knowledge and professional development that will enable community-college teachers to meet the demands forced on them by the “persistence and completion” agenda can only come from adequately supported programs that offer

a serious and needed reform for the gross lack of training that universities provide to graduate students, many of whom will go on to become community college instructors. (257)



Anderst et al. Accelerated Learning at a Community College. TETYC, Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), a part of the City University of New York system (CUNY) (11) in spring and fall semesters, 2014 (14).

In the ALP model followed at QCC, students who had “placed into remediation” simultaneously took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” in the two-course first-year curriculum (11). Both courses were taught by the same instructor, who could develop specific curriculum that incorporated program elements designed to encourage the students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), both for placement and to “exit remediation”; passing it earns students a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey students took in the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases, their sample was too small for results to rise to statistical significance, although final numbers did allow such evaluation (18). They also note that the students in the ALP sections whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend encouraging more open-ended responses than they received to more accurately account for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because it indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)



Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example, ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test scores alone, which ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; the use of assessment data to design a curriculum that meets actual needs; the creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).