College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Jensen and Ely. An “Externship” for Teaching at Two-Year Colleges. TETYC, Mar. 2017. Posted 04/06/2017.

Jensen, Darin, and Susan Ely. “A Partnership Teaching Externship Program: A Model That Makes Do.” Teaching English in the Two-Year College 44.3 (2017): 247-63. Web. 26 Mar. 2017.

Darin Jensen and Susan Ely describe a program to address the dearth of writing instructors prepared to meet the needs of community-college students. This program, an “externship,” was developed by the authors as an arrangement between Metropolitan Community College in Omaha, Nebraska (MCC), and the University of Nebraska at Omaha (UNO) (247).

The authors write that, as full-time faculty at MCC, they were expected to teach developmental writing but that neither had training in basic-writing instruction or in working with community-college populations (247). When Ely became coordinator of basic writing, she found that while she could hire instructors with knowledge of first-year writing, a pool of instructors adequately prepared to teach in the particular context of community colleges “did not exist” (248).

This dearth was especially concerning because, according to a 2015 Fact Sheet from the American Association of Community Colleges, 46% of entering students attend community colleges, while a 2013 report from the National Conference of State Legislatures notes that more than 50% of these students enroll in remedial coursework (250). Community colleges also serve the “largest portion” of minority, first-generation, and low-income students (250-51).

Jensen and Ely attribute much of this lack of preparation for teaching developmental writing to the nature of graduate training; they quote a 2014 report from the Modern Language Association that characterizes graduate education as privileging the “‘narrow replication’ of scholars” at the expense, in the authors’ words, of “more substantive training in teaching” (249). Such a disconnect, the authors contend, disadvantages both the undergraduate students who need instructors versed in basic writing and the graduating literacy professionals who lack the preparation for teaching that will ensure them full-time employment (248). They quote Ellen Andrews Knodt to note that the emphasis on teaching needed to serve community-college students suffers “almost by definition” from an “inferior status” (qtd. in Jensen and Ely 249).

Jensen and Ely’s research documents a lack of attention to teacher preparation even among resources dedicated to community colleges and basic writing. Holly Hassel’s 2013 examination of Teaching English in the Two-Year College from 2001 to 2012 found only “8 of 239 articles” that addressed teacher preparation (249). In 2006, Barbara Gleason “found fewer than twenty graduate courses in teaching basic writing across the country” (250). The authors found only one issue of TETYC, in March 2001, dealing with teacher preparation, and Gleason found only two issues of the Journal of Basic Writing, from 1981 and 1984, that focused primarily on professional development for teaching this student population (250).

Given these findings and their own experiences, Jensen and Ely designed a program that would be “activist in nature” (248), committed to the idea, drawn from Patrick Sullivan, that community-college teaching participates in “the noble work of democratizing American higher education” (249).

Jensen and Ely chose Gregory Cowan’s 1971 term “externship” over “apprenticeship” because of the latter’s “problematic hierarchical nature” (251). They abandoned a preliminary internship model because the graduate students were “not really interns, but were student teachers” and did not produce traditional papers (251). Subsequent iterations were structured as independent studies under Dr. Tammie Kennedy at UNO (251).

The authors explain that neither institution fully supported the project, at least partly, they believe, because the “low value” of community-college teaching makes it “a hard sell” (252). Dr. Kennedy earned no compensation and had no clear understanding of how the work counted in her career advancement (251-52). The authors received no reassigned time and only a $500 stipend. They emphasize that these conditions “demonstrate the difficult realities” of the kind of change they hoped to encourage (252).

Students in the program committed to eighty hours of work during a spring semester, including readings, partnering on syllabus and course design, student-teaching in every community-college course meeting, participating in planning and reflections before and after the classes, and attending a collaborative grading session (252). The externship went far beyond what the authors consider typical practica for teaching assistants; it more nearly resembled the K-12 preservice model, “provid[ing] guided practice and side-by-side mentoring for the novice teacher,” as well as extensive exposure to theoretical work in serving community-college populations (252). The graduate students developed a teaching portfolio, a teaching philosophy for the community-college environment, and a revised CV (251).

The authors share their reading lists, beginning with Mike Rose’s Lives on the Boundary and Burton R. Clark’s “The ‘Cooling-Out’ Function in Higher Education,” which they value for its “counterpoint to the promise of developmental education in Rose’s books” (252). Works by Ilona Leki, Dana Ferris, and Ann Johns offered insight into working with ESL students, while Adrienne Rich’s “Teaching Language in Open Admissions” spoke to the needs of first-generation students (253). The authors drew from Susan Naomi Bernstein’s Teaching Developmental Writing in the first year; readings on the politics of remediation came from Mary Soliday and Patrick Finn (253).

The program emphasized course design beyond the bare introduction offered in the graduate practicum. Themed courses using “an integrated reading and writing model” involved “vocabulary acquisition, close reading, summary, explicit instruction, and discussion” (254). Jensen and Ely stress the importance of “writ[ing] with our students” and choosing texts, often narratives rather than non-fiction, based on the need to engage their particular population (255).

Another important component was the shared grading process that allowed both the authors and the graduate students to discuss and reflect on the outcomes and priorities for community-college education (255). The authors “eschew[ed] skill and drill pedagogy,” focusing on “grammar in the context of writing increasingly complex summaries and responses” (255). Though they state that the time commitment in such sessions makes them impractical “on a regular basis,” they value them as “an intense relational experience” (255).

Throughout, the authors emphasize that working with the graduate students to refine pedagogy for the community college allowed them to reflect on and develop their own theoretical understanding and teaching processes (254, 255).

The graduate students participated in interviews in which they articulated a positive response to the program (256). The authors report that while the four students in their first two years constitute too small a sample for generalization, the program contributed to success in finding full-time employment (257).

Jensen and Ely conclude that the current structure of higher education and the low regard for teaching make it unlikely that programs like theirs will be easy to establish and maintain. Yet, they note, the knowledge and professional development that will enable community-college teachers to meet the demands forced on them by the “persistence and completion” agenda can only come from adequately supported programs that offer

a serious and needed reform for the gross lack of training that universities provide to graduate students, many of whom will go on to become community college instructors. (257)



Anderst et al. Accelerated Learning at a Community College. TETYC Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), part of the City University of New York (CUNY) system, during the spring and fall semesters of 2014 (11, 14).

In the ALP model followed at QCC, students who had “placed into remediation” simultaneously took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” in the two-course first-year curriculum (11). Both courses were taught by the same instructor, who could develop specific curriculum that incorporated program elements designed to encourage the students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), for placement and to “exit remediation,” thus receiving a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey administered during the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including the ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases their samples were too small for results to reach statistical significance, although final numbers did allow such evaluation (18). They also note that the ALP students whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend soliciting more open-ended responses than their survey elicited in order to account more accurately for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because it indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)



Hassel and Giordano. Assessment and Remediation in the Placement Process. CE, Sept. 2015. Posted 10/19/2015.

Hassel, Holly, and Joanne Baird Giordano. “The Blurry Borders of College Writing: Remediation and the Assessment of Student Readiness.” College English 78.1 (2015): 56-80. Print.

Holly Hassel and Joanne Baird Giordano advocate for the use of multiple assessment measures rather than standardized test scores in decisions about placing entering college students in remedial or developmental courses. Their concern results from the “widespread desire” evident in current national conversations to reduce the number of students taking non-credit-bearing courses in preparation for college work (57). While acknowledging the view of critics like Ira Shor that such courses can increase time-to-graduation, they argue that for some students, proper placement into coursework that supplies them with missing components of successful college writing can make the difference between completing a degree and leaving college altogether (61-62).

Sorting students based on their ability to meet academic outcomes, Hassel and Giordano maintain, is inherent in composition as a discipline. What’s needed, they contend, is more comprehensive analysis that can capture the “complicated academic profiles” of individual students, particularly in open-access institutions where students vary widely and where the admissions process has not already identified and acted on predictors of failure (61).

They cite an article from The Chronicle of Higher Education stating that at two-year colleges, “about 60 percent of high-school graduates . . . have to take remedial courses” (Jennifer Gonzalez, qtd. in Hassel and Giordano 57). Similar statistics from other university systems, as well as pushes from organizations like Complete College America to do away with remedial education in the hope of raising graduation rates, lead Hassel and Giordano to argue that better methods are needed to document what competences college writing requires and whether students possess them before placement decisions are made (57). The inability to make accurate decisions affects not only the students, but also the instructors who must alter curriculum to accommodate misplaced students, the support staff who must deal with the disruption to students’ academic progress (57), and ultimately the discipline of composition itself:

Our discipline is also affected negatively by not clearly and accurately identifying what markers of knowledge and skills are required for precollege, first-semester, second-semester, and more advanced writing courses in a consistent way that we can adequately measure. (76)

In the authors’ view, the failure of placement to correctly identify students in need of extra preparation can be largely attributed to the use of “stand-alone” test scores, for example ACT and SAT scores and, in the Wisconsin system where they conducted their research, scores from the Wisconsin English Placement Test (WEPT) (60, 64). They cite data demonstrating that reliance on such single measures is widespread; in Wisconsin, such scores “[h]istorically” drove placement decisions, but concerns about student success and retention led to specific examinations of the placement process. The authors’ pilot process using multiple measures is now in place at nine of the two-year colleges in the system, and the article details a “large-scale scholarship of teaching and learning project . . . to assess the changes to [the] placement process” (62).

The scholarship project comprised two sets of data. The first set involved tracking the records of 911 students, including information about their high school achievements; their test scores; their placement, both recommended and actual; and their grades and academic standing during their first year. The “second prong” was a more detailed examination of the first-year writing, and in some cases the second-year writing, of fifty-four students who consented to participate. In all, the researchers examined an average of 6.6 pieces of writing per student and a total of 359 samples (62-63). The purpose of this closer study was to determine “whether a student’s placement information accurately and sufficiently allowed that student to be placed into an appropriate first-semester composition course with or without developmental reading and studio writing support” (63).

From their sample, Hassel and Giordano conclude that standardized test scores alone do not provide a usable picture of the abilities students bring to college with regard to such areas as rhetorical knowledge, knowledge of the writing process, familiarity with academic writing, and critical reading skills (66).

To assess each student individually, the researchers considered not just their ACT and WEPT scores and writing samples but also their overall academic success, including “any reflective writing” from instructors, and a survey (66). They note that WEPT scores more often overplaced students, while the ACT underplaced them, although the two tests were “about equally accurate” (66-67).

The authors provide a number of case studies to indicate how relying on test scores alone would misrepresent students’ abilities and specific needs. For example, the “strong high school grades and motivation levels” (68) of one student would have gone unmeasured in an assessment process using only her test scores, which would have placed her in a developmental course. More careful consideration of her materials and history revealed that she could succeed in a credit-bearing first-year writing course if provided with a support course in reading (67). Similarly, a Hmong-speaking student would have been placed into developmental courses based on test-scores alone, which ignored his success in a “challenging senior year curriculum” and the considerable higher-level abilities his actual writing demonstrated (69).

Interventions from the placement team using multiple measures to correct the test-score indications resulted in a 90% success rate. Hassel and Giordano point out that such interventions enabled the students in question to move more quickly toward their degrees (70).

Additional case studies illustrate the effects of overplacement. An online registration system relying on WEPT scores allowed one student to move into a non-developmental course despite his weak preparation in high school and his problematic writing sample; this student left college after his second semester (71-72). Other problems arose because of discrepancies between reading and writing scores. The use of multiple measures permitted the placement team to fine-tune such students’ coursework through detailed analysis of the actual strengths and weaknesses in the writing samples and high-school curricula and grades. In particular, the authors note that students entering college with weak higher-order cognitive and rhetorical skills require extra time to build these abilities; providing this extra time through additional semesters of writing moves students more quickly and reliably toward degree completion than the stress of a single inappropriate course (74-76).

The authors offer four recommendations (78-79): the use of multiple measures; the use of assessment data to design a curriculum that meets actual needs; the creation of well-thought-out “acceleration” options through pinpointing individual needs; and a commitment to the value of developmental support “for students who truly need it”: “Methods that accelerate or eliminate remediation will not magically make such students prepared for college work” (79).