College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined 359 samples, an average of 6.6 pieces of writing per student (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test scores alone, a placement that ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions by the placement team, which used multiple measures to correct the test-score indications, resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing that time through additional semesters of writing moves students toward degree completion more quickly and reliably than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; the use of assessment data to design a curriculum that meets actual needs; the creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).



Hansen et al. Effectiveness of Dual Credit Courses. WPA Journal, Spring 2015. Posted 08/12/15.

Hansen, Kristine, Brian Jackson, Brett C. McInelly, and Dennis Eggett. “How Do Dual Credit Students Perform on College Writing Tasks After They Arrive on Campus? Empirical Data from a Large-Scale Study.” Journal of the Council of Writing Program Administrators 38.2 (2015): 56-92. Print.

Kristine Hansen, Brian Jackson, Brett C. McInelly, and Dennis Eggett conducted a study at Brigham Young University (BYU) to determine whether students who took a dual-credit/concurrent-enrollment writing course (DC/CE) fared as well on the writing assigned in a subsequent required general-education course as students who took or were taking the university’s first-year-writing course. With few exceptions, Hansen et al. concluded that the students who had earned their college writing credit through the earlier courses performed similarly to students who had not. However, the study raised questions about the degree to which taking college writing in high school, or, for that matter, in any single class, adequately meets the needs of maturing student writers (79).

The exigence for the study was the proliferation of efforts to move college work into high schools, presumably to allow students to graduate faster and thus lower the cost of college, with some jurisdictions allowing students as young as fourteen to earn college credit in high school (58). Local, state, and federal policy makers all support and even “mandate” such opportunities (57), with rhetorical and financial backing from organizations and non-profits promoting college credit as a boon to the overall economy (81). Hansen et al. express concern that no uniform standards or qualifications govern these initiatives (58).

The study examined writing in BYU’s “American Heritage” (AH) course. In this course, which in September 2012 enrolled approximately half of the first-year class, students wrote two 900-word papers involving argument and research. They wrote the first paper in stages, with grades and TA feedback throughout; for the second paper, they relied on peer feedback and on the understanding of an effective writing process they had presumably developed in the first assignment (64). Hansen et al. provide the prompts for both assignments (84-87).

The study consisted of several components. Students in the AH course were asked to sign a consent form; those who did so were emailed a survey about their prior writing instruction. Of these, 713 took the survey. From these 713 students, 189 were selected (60-61). Trained raters using a holistic rubric with a 6-point scale read both essays submitted by these 189 students. The rubric pinpointed seven traits: “thesis, critical awareness, evidence, counter-arguments, organization, grammar and style, sources and citations” (65). A follow-up survey assessed students’ experiences writing the second paper, while focus groups provided additional qualitative information. Hansen et al. note that although only eleven students participated in the focus groups, the discussion provided “valuable insights into students’ motivations for taking pre-college credit options and the learning experiences they had” (65).

The 189 participants fell into five groups: those whose “Path to FYW Credit” consisted of AP scores; those who received credit for a DC/CE option; those planning to take FYW in the future; those taking it concurrently with AH; and those who had taken BYU’s course, many of them in the preceding summer (61, 63). Analysis reveals that the students studied were a good match for the full BYU first-year population in such categories as high-school GPA and ACT scores (62). However, strong high-school GPAs and ACT scores and evidence of regular one-on-one interaction with instructors (71), coupled with the description of BYU as a “private institution” with “very selective admission standards” (63), indicate that the students studied, while coming from many geographic regions, were especially strong students whose experiences could not be generalized to different populations (63, 82).

Qualitative results indicated that, for the small sample of students who participated in the focus group, the need to “get FYW out of the way” was not the main reason for choosing AP or DC/CE options. Rather, the students wanted “a more challenging curriculum” (69). These students reported good teaching practices; in contrast to the larger group taking the earlier survey, who reported writing a variety of papers, the students in the focus group described a “literature[-]based” curriculum with an emphasis on timed essays and fewer research papers (69). Quotes from the focus-group students who took the FYW course at BYU reveal that they found it “repetitive” and “a good refresher,” not substantially different from their high-school courses despite their having reported an emphasis on literary analysis in those courses (72). The students attested that the earlier courses had prepared them well, although some expressed concerns about their comfort coping with various aspects of the first-year experience (71-72).

Three findings invited particular discussion (73):

  • Regardless of the writing instruction they had received, the students differed very little in their performance in the American Heritage class;
  • In general, although their GPAs and test scores indicated that they should be superior writers, the students scored in the center of the 6-point rubric scale, below expectations;
  • Scores were generally higher for the first essay than for the second.

The researchers argue that the first finding does not provide definitive evidence as to whether “FYW even matters” (73). They cite research by numerous scholars that indicates that the immediate effects of a writing experience are difficult to measure because the learning of growing writers does not exhibit a “tidy linear trajectory” (74). The FYW experience may trigger “steps backward” (Nancy Sommers, qtd. in Hansen et al. 72). The accumulation of new knowledge, they posit, can interfere with performance. Therefore, students taking FYW concurrently with AH might have been affected by taking in so much new material (74), while those who had taken the course in the summer had significantly lower GPAs and ACT scores (63). The authors suggest that these factors may have skewed the performance of students with FYW experience.

The second finding, the authors posit, similarly indicates that the students were in the early-to-middle stages of becoming versatile, effective writers across a range of genres. Hansen et al. cite research on the need for a “significant apprenticeship period” in writing maturation (76). Students in their first year of college are only beginning to negotiate this developmental stage.

The third finding may indicate a difference in the demands of the two prompts, a difference in the time and energy students could devote to later assignments, or, the authors suggest, a difference in the feedback built into the two papers (76-77).

Hansen et al. recommend support for the NCTE position that taking a single course, especially at an early developmental stage, does not provide students an adequate opportunity for the kind of sustained practice across multiple genres required for meaningful growth in writing (77-80). Decisions about DC/CE options should be based on individual students’ qualifications (78); programs should work to include additional writing courses in the overall curriculum, designing these courses to allow students to build on skills initiated in AP, DC/CE, and FYW courses (79).

They further recommend that writing programs shift from promising something “new” and “different” to an emphasis on the recursive, nonlinear nature of writing, clarifying to students and other stakeholders the value of ongoing practice (80). Additionally, they recommend attention to the motives and forces of the “growth industry” encouraging the transfer of more and more college credit to high schools (80). The organizations sustaining this industry, they write, hope to foster a more literate, capable workforce. But the authors contend that speeding up and truncating the learning process, particularly with regard to a complex cognitive task like writing, undercut this aim (81-82) and do not, in fact, guarantee faster graduation (79). Finally, citing Richard Haswell, they call for more empirical, replicable studies of phenomena like the effects of DC/CE courses in order to document their impact across broad demographics (82).