College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Anderst et al. Accelerated Learning at a Community College. TETYC Sept. 2016. Posted 10/21/2016.

Anderst, Leah, Jennifer Maloy, and Jed Shahar. “Assessing the Accelerated Learning Program Model for Linguistically Diverse Developmental Writing Students.” Teaching English in the Two-Year College 44.1 (2016): 11-31. Web. 07 Oct. 2016.

Leah Anderst, Jennifer Maloy, and Jed Shahar report on the Accelerated Learning Program (ALP) implemented at Queensborough Community College (QCC), part of the City University of New York (CUNY) system (11), in the spring and fall semesters of 2014 (14).

In the ALP model followed at QCC, students who had “placed into remediation” took both an “upper-level developmental writing class” and the “credit-bearing first-year writing course” of the two-course first-year curriculum at the same time (11). Both courses were taught by the same instructor, who could develop a curriculum incorporating program elements designed to encourage students to see the links between the classes (13).

The authors discuss two “unique” components of their model. First, QCC students are required to take a high-stakes, timed writing test, the CUNY Assessment Test for Writing (CATW), both for placement and to “exit remediation,” thereby receiving a passing grade for their developmental course (15). Second, the ALP at Queensborough integrated English language learners (ELLs) with native English speakers (14).

Anderst et al. note research showing that in most institutions, English-as-a-second-language instruction (ESL) usually occurs in programs other than English or writing (14). The authors state that as the proportion of second-language learners increases in higher education, “the structure of writing programs often remains static” (15). Research by Shawna Shapiro, they note, indicates that ELL students benefit from “a non-remedial model” (qtd. in Anderst et al. 15), validating the inclusion of ELL students in the ALP at Queensborough.

Anderst et al. review research on the efficacy of ALP. Crediting Peter Adams with the concept of ALP in 2007 (11), the authors cite Adams’s findings that such programs have had “widespread success” (12), notably in improving “passing rate[s] of basic writing students,” improving retention, and accelerating progress through the first-year curriculum (12). Other research supports the claim that ALP students are more successful in first- and second-semester credit-bearing writing courses than developmental students not involved in such programs, although data on retention are mixed (12).

The authors note research on the drawbacks of high-stakes tests like the required exit-exam at QCC (15-16) but argue that strong student scores on this “non-instructor-based measurement” (26) provided legitimacy for their claims that students benefit from ALPs (16).

The study compared students in the ALP with developmental students not enrolled in the program. English-language learners in the program were compared both with native speakers in the program and with similar ELL students in specialized ESL courses. Students in the ALP classes were compared with the general cohort of students in the credit-bearing course, English 101. Comparisons were based on exit-exam scores and grades (17). Pass rates for the exam were calculated before and after “follow-up workshops” for any developmental student who did not pass the exam on the first attempt (17).

Measured by pass and withdrawal rates, Anderst et al. report, ALP students outperformed students in the regular basic writing course both before and after the workshops, with ELL students in particular succeeding after the follow-up workshops (17-18). They report a fall-semester pass rate of 84.62% for ELL students enrolled in the ALP after the workshop, compared to a pass rate of 43.4% for ELL students not participating in the program (19).

With regard to grades in English 101, the researchers found that for ALP students, the proportion of As was lower than for the course population as a whole (19). However, this difference disappeared “when the ALP cohort’s grades were compared to the non-ALP cohort’s grades with English 101 instructors who taught ALP courses” (19). Anderst et al. argue that comparing grades given to different cohorts by the same instructors is “a clearer measure” of student outcomes (19).

The study also included an online survey administered during the second iteration of the study in fall 2014, once at six weeks and again at fourteen weeks. Responses of students in the college’s “upper-level developmental writing course designed for ESL students” were compared to those of students in the ALP, including ELL students in this cohort (22).

The survey asked about “fit”—whether the course was right for the student—and satisfaction with the developmental course, as well as its value as preparation for the credit-bearing course (22). At six weeks, responses from ALP students to these questions were positive. However, in the later survey, agreement on overall sense of “fit” and the value of the developmental course dropped for the ALP cohort. For students taking the regular ESL course, however, these rates of agreement increased, often by large amounts (23).

Anderst et al. explain these results by positing that at the end of the semester, ALP students, who were concurrently taking English 101, had come to see themselves as “college material” rather than as remedial learners and no longer felt that the developmental course was appropriate for their ability level (25). Students in one class taught by one of the researchers believed that they were “doing just as well, if not better in English 101 as their peers who were not also in the developmental course” (25). The authors consider this shift in ALP students’ perceptions of themselves as capable writers an important argument for ALP and for including ELL students in the program (25).

Anderst et al. note that in some cases, their sample was too small for results to reach statistical significance, although final numbers did allow such evaluation (18). They also note that the students in the ALP sections whose high-school GPAs were available had higher grades than the “non-ALP” students (20). The ALP cohort included only students “who had only one remedial need in either reading or writing”; students who placed into developmental levels in both areas found the ALP work “too intensive” (28n1).

The authors recommend soliciting more open-ended responses than they received in order to account more accurately for the decrease in satisfaction in the second survey (26). They conclude that “they could view this as a success” because it indicated the shift in students’ views of themselves:

This may be particularly significant for ELLs within ALP because it positions them both institutionally and psychologically as college writers rather than isolating them within an ESL track. (26)


Del Principe and Ihara. Reading at a Community College. TETYC, Mar. 2016. Posted 04/10/2016.

Del Principe, Annie, and Rachel Ihara. “‘I Bought the Book and I Didn’t Need It’: What Reading Looks Like at an Urban Community College.” Teaching English in the Two-Year College 43.3 (2016): 229-44. Web. 10 Mar. 2016.

Annie Del Principe and Rachel Ihara conducted a qualitative study of student reading practices at Kingsborough Community College, CUNY. They held interviews with and gathered course materials from ten students over the span of the students’ time at the college between fall 2011 and fall 2013, amassing “complete records” for five (231). They found a variety of definitions of acceptable reading practices across disciplines; they urge English faculty to recognize this diversity, but they also advocate for more reflection from faculty in all academic subject areas on the purposes of the reading they assign and on how reading can be supported at two-year colleges (242).

Four of the five students who were intensively studied placed into regular first-year composition and completed associate’s degrees while at Kingsborough; the fifth enrolled in a “low-level developmental writing class” and transferred to a physician assistant program at a four-year institution in 2015 (232). The researchers’ inquiry covered eighty-three different courses and included twenty-three hours of interviews (232).

The authors’ review of research on reading notes that many different sources across institutions and disciplines see difficulty with reading as a reason that students often struggle in college. The authors recount a widespread perception that poor preparation, especially in high school, and students’ lack of effort are to blame for students’ difficulties but contend that the ways in which faculty frame and use reading also influence how students approach assigned texts (230). Faculty, Del Principe and Ihara write, often do not see teaching reading as part of their job and opt for modes of instruction that convey information in ways they perceive as efficient, such as lecturing extensively and explaining difficult texts rather than helping students work through them (230).

A 2013 examination of seven community colleges in seven states by the National Center on Education and the Economy (NCEE) reported that the kinds of reading and writing students do in these institutions “are not very cognitively challenging”; don’t require students “to do much” with assigned reading; and demand “performance levels” that are only “modest” (231). This study found that more intensive work on analyzing and reflecting on texts occurred predominantly in English classes (231). The authors argue that because community-college faculty are aware of the problems caused by reading difficulties, these faculty are “constantly experimenting” with strategies for addressing them; this focus, in the authors’ view, makes community colleges important spaces for investigating reading issues (231).

Del Principe and Ihara note that in scholarship by Linda Adler-Kassner and Heidi Estrem and by David Jolliffe as well as in the report by NCEE, the researchers categorize the kinds of reading students are asked to do in college (232-33). The authors state that their “grounded theory approach” (232) differs from the methods in these works in that they

created categories based on what students said about how they used reading in their classes and what they did (or didn’t do) with the assigned reading rather than on imagined ways of reading or what was ostensibly required by the teacher or by the assignment. (233)

This methodology produced “five themes”:

  • “Supplementing lecture with reading” (233). Students reported this activity in 37% of the courses examined, primarily in non-English courses that depended largely on lecture. Although textbooks were assigned, students received most of the information in lectures but turned to reading to “deepen [their] understanding” or for help if the lecture proved inadequate in some way (234).
  • “Listening and taking notes as text” (233). This practice, encountered in 35% of the courses, involved situations in which a textbook or other reading was listed on the syllabus but either implicitly or explicitly designated as “optional.” Instructors provided handouts or PowerPoint outlines; students combined these with notes from class to create de facto “texts” on which exams were based. According to Del Principe and Ihara, “This marginalization of long-form reading was pervasive” (235).
  • “Reading to complete a task” (233). In 24% of the courses, students reported using reading for in-class assignments like lab reports or quizzes; in one case, a student described a collaborative group response to quizzes (236). Other activities included homework such as doing math problems. Finally, students used reading to complete research assignments. The authors discovered very little support for or instruction on the use and evaluation of materials incorporated into research projects and posit that much of this reading may have focused on “dubious Internet sources” and may have included cut-and-paste (237).
  • “Analyzing text” (233). Along with “reflecting on text,” below, this activity occurred “almost exclusively” in English classes (238). The authors describe assignments calling for students to attend to a particular line or idea in a text or to compare themes across texts. Students reported finding “on their own” that they had to read more slowly and carefully to complete these tasks (238).
  • “Reflecting on text” (233). Only six of the 83 courses asked students to “respond personally” to reading; only one was not an English course (239). The assignments generally led to class discussion, in which, according to the students, few class members participated, possibly because “Nobody [did] the reading” (student, qtd. in Del Principe and Ihara 239; emendation original).

Del Principe and Ihara focus on the impact of instructors’ “following up” on their assignments with activities that “require[d] students to draw information or ideas directly from their own independent reading” (239). Such follow-up surfaced in only fourteen of the 83 classes studied, with six of the fourteen being English classes. Follow-up in English included informal responses and summaries as well as assigned uses of outside material in longer papers, while in courses other than English, quizzes or exams encouraged reading (240). The authors found that in courses with no follow-up, “students typically did not do the reading” (241).

Del Principe and Ihara acknowledge that composition professionals will find the data “disappointing” but feel that it’s important not to be misdirected by a “specific disciplinary lens” into dismissing the uses students and other faculty make of different kinds of reading (241). In many classes, they contend, reading serves to back up other kinds of information rather than serving as the principal focus, as it does in English classes. However, they do call for more reflection across the curriculum. They note that students are often required to purchase expensive books that are never used. They hope to trigger an “institutional inquiry” that will foster more consideration of how instructors in all fields can encourage the kinds of reading they want students to do (242).


Trimbur, John. Translingualism and Close Reading. CE, Jan. 2016. Posted 01/30/2016.

Trimbur, John. “Translingualism and Close Reading.” College English 78.3 (2016): 219-27. Print.
The January 2016 issue of College English addresses the question of “translingualism,” a term that Min-Zhan Lu and Bruce Horner, in their Introduction to the issue, see as “one possible entry point” for overcoming the perception that there is only a single form of English that is universally standard and acceptable (207). They discuss at length the challenges of defining translingualism, presenting it in part as the recognition that difference in language use is not just a phenomenon of L2 learning but rather is a feature of “the normal transactions of daily communicative practice of ordinary people” (212).
In this issue, John Trimbur “traces a branch of translingualism to its source” (220). He focuses on texts by Mina Shaughnessy, David Bartholomae, Bruce Horner, and Min-Zhan Lu. He locates the origin of this translingual impulse in the evolution of open admissions at the City University of New York (CUNY) in the 1960s and 1970s, as writing teachers confronted evidence that the edifice of “monolingualism” in English was an ideology of exclusion rather than a fact.
Trimbur argues that, far from being an accurate description of United States English prior to the turmoil of the 1960s, “monolingualism is not a possible linguistic condition at all” (220). He contends that all speakers move among various dialects and registers; the heterogeneous voices that are now becoming more audible demonstrate the existence of “a plurilingual periphery within the Anglophone centers” such as London and New York (219; emphasis original).
Trimbur recounts the history of CUNY from its birth in 1847 as the Free Academy, documenting that despite initiatives such as Search for Education, Elevation, and Knowledge (SEEK), the system remained largely White (220). In the late 1960s, demands from groups like the Black and Puerto Rican Student Community (BPRSC), in concert with growing civil-rights activism, pressured CUNY administrators to establish a true open-admissions policy (221). This shift introduced writing teachers to student writing that many considered worthy only of “eradicat[ion]” (221). In this new environment, Trimbur writes, “literature MAs and PhDs,” among them Mina Shaughnessy, began to draw on their expertise in New Critical close reading “to find order . . . in the language differences of students formerly excluded by selective admissions” (221).
Trimbur contrasts Shaughnessy’s work to understand the logic behind apparently anomalous usage with the approach of Bartholomae, one of the scholars Trimbur designates as members of the “Pitt school” (222). These scholars, Trimbur writes, recognized that literary theorists routinely constructed meaning from arcane texts by literary authors such as Donald Barthelme or e. e. cummings; the Pitt school critics “placed an extraordinary pressure on themselves” to apply these same approaches to student writing in order to understand “evidence of intention” (222).
To clarify this contrast, Trimbur hypothesizes Bartholomae’s response to an example of student writing addressed by Shaughnessy in her 1977 Errors and Expectations: A Guide to the Teacher of Basic Writing. Whereas Shaughnessy argued for “a logic of nonstandard English” in the essay by noting its use of the conventions of an “evangelical sermon,” Trimbur posits that Bartholomae would see the student practicing rhetorical strategies that positioned him as an applicant to academic authority, such as “moves up and down the ladder of abstraction” from concepts to examples and a gesture toward academic citation (223). In Trimbur’s view, Bartholomae would interpret this student’s effort as a sign not of a writer unable to abandon his “home language” but rather of a writer “activated by his intention to ‘invent the university’” (223).
Trimbur then compares Bartholomae’s contribution to the approaches of Horner and Lu. Though he notes that both Horner and Bartholomae viewed language difference as socially and historically constructed (220), he contends that Bartholomae maintained in some part a view of standard English as a destination toward which students evolved, somewhat as an L2 learner might move toward a “target language” (224). Horner, by contrast, develops a “dialectical and resolutely social sense of error” in which editing becomes a “negotiation in situations of unequal power and authority”; in this view, teachers might look at student writing “not just for its errors but for the possible rhetorical effects of its language differences” (224).
Trimbur argues that Min-Zhan Lu further complicates the idea of a monolingual center for English by challenging the integrity of standard usage itself. In Lu’s view, Trimbur states, supposedly standard language is inherently “unstable, fluctuating, and hybrid” (225). The resistance of monolingual ideologies to the unconventional and different is the product of a “struggle among conflicting discourses with unequal sociopolitical power” (224-25). In this view, linguistic hierarchies become “momentary hegemon[ies]” (225), within which close reading can locate the value of elided difference.
Trimbur sees an important benefit in such approaches to student writing in their power to bring basic and second-language writing in from “the margins,” where they have been “orbiting around the mainstream English at the center in first-year composition” (226). He calls on composition to stop seeing difference as a reason to isolate the unacceptable and instead to recognize the degree to which difference actually inhabits all language use, thus “dismantl[ing] these divisions and the pernicious judgments about language differences and about the differences between people that they have rested on” (226).