College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Nazzal et al. Curriculum for Targeted Instruction at a Community College. TETYC, Mar. 2020. Posted 06/11/2020.

Nazzal, Jane S., Carol Booth Olson, and Huy Q. Chung. “Differences in Academic Writing across Four Levels of Community College Composition Courses.” Teaching English in the Two-Year College 47.3 (2020): 263-96. Print.

Jane S. Nazzal, Carol Booth Olson, and Huy Q. Chung present an assessment tool to help writing educators design curriculum during a shift from faculty-scored placement exams and developmental or “precollegiate” college courses (263) to what they see as common reform options (264-65, 272).

These options, they write, often include directed self-placement (DSP), while preliminary courses designed for students who might struggle with “transfer-level” courses are often replaced with two college-level courses, one with a concurrent support component for students who feel they need extra help, and one without (265). At the authors’ institution, “a large urban community college in California” with an enrollment of 50,000 that is largely Hispanic and Asian, faculty-scored exams placed 15% of the students into the transfer-level course; after the implementation of DSP, 73% chose the transfer course, 12% the course with support, and the remaining 15% the precollegiate courses (272).

The transition to DSP and away from precollegiate options, according to Nazzal et al., resulted from a shift away from “access” afforded by curricula intended to help underprepared students toward widespread emphasis on persistence and time to completion (263). The authors cite scholarship contending that processes that placed students according to faculty-scored assessments incorrectly placed one-third to one-half of students and disparately affected minority students; fewer than half of students placed into precollegiate courses reach the transfer-level course (264).

In the authors’ view, the shift to DSP as a solution for these problems creates its own challenges. They contend that valuable information about student writing disappears when faculty no longer participate in placement processes (264). Moreover, they question the reliability of high-school grades as a basis for students’ decisions, arguing that high school curricula are often short on writing (265). They cite “burden-shifting” when the responsibility for making good choices is passed to students who may have incomplete information and little experience with college work (266). Noting as well that lower-income students may opt for the unsupported transfer course because of the time pressure of their work and home lives, the authors see a need for research on how to address the specific situations of students who opt out of support they may need (266-67).

The study implemented by Nazzal et al. attempts to identify these specific areas that affect student success in college writing in order to facilitate “explicit teaching” and “targeted instruction” (267). They believe that their process identifies features of successful writing that are largely missing from the work of inexperienced writers but that can be taught (268).

The authors review cognitive research on the differences between experienced and novice writers, identifying areas like “Writing Objectives,” “Revision,” and “Sense of Audience” (269-70). They present “[f]oundational [r]esearch” that compares the “writer-based prose” of inexpert writers with the “reader-based prose” of experts (271), as well as the whole-essay conceptualization of successful writers versus the piecemeal approach of novices, among other differentiating features (269).

The study was implemented during the first two weeks of class over two semesters, with eight participating faculty teaching thirteen sections. Two hundred twenty-five students from the three precollegiate levels and the single transfer-level course completed the tasks. The study essays were similar to the standard college placement essays taken by most of the students in that they were timed responses to prompts, but for the study, students were asked to read two pieces and “interpret, and synthesize” them in their responses (272-73). One piece was a biographical excerpt (of Harriet Tubman or of war hero Louie Zamperini) and the other a “shorter, nonfiction article outlining particular character qualities or traits,” one discussing leadership and the other resilience (274). The prompts asked students to choose the single trait exhibited by the subject that most contributed to his or her success (274).

In the first of two 45-minute sessions, teachers read the pieces aloud while students followed along, then gave preliminary guidance using a graphical organizer. In the second session, students wrote their essays. The essays were rated by experienced writing instructors trained in scoring, using criteria for “high-school writing competency” based on principles established by mainstream composition assessment models (273-74).

Using “several passes through the data,” the lead researcher examined a subset of 76 papers that covered the full range of scores in order to identify features that were “compared in frequency across levels.” Differences in the frequency of these features were analyzed for statistical significance across the four levels (275). A subsample of 18 high-scoring papers was subsequently analyzed for “distinguishing elements . . . that were not present in lower-scoring papers,” including some features that had not been previously identified (275).

Nine features were compared across the four levels; the authors provide examples of presence versus absence of these features (276-79). Three features differed significantly in their frequency in the transfer-level course versus the precollegiate courses: including a clear claim, responding to the specific directions of the prompt, and referring to the texts (279).

Nazzal et al. also discovered that a quarter of the students placed in the transfer-level course failed to refer to the texts, and that only half the students in that course earned passing scores, indicating that the others had not incorporated one or more of the important features. The authors concluded that students at all levels would benefit from a curriculum targeting these moves (281).

Writing that only 9% of the papers scored in the “high” range of 9-12 points, Nazzal et al. present an annotated example of a paper that includes components that “went above and beyond the features that were listed” (281). Four distinctive features of these papers were

(1) a clear claim that is threaded throughout the paper; (2) a claim that is supported by relevant evidence and substantiated with commentary that discusses the significance of the evidence; (3) a conclusion that ties back to the introduction; and (4) a response to all elements of the prompt. (282)

Providing appendices to document their process, Nazzal et al. offer recommendations for specific “writing moves that establish communicative clarity in an academic context” (285). They contend that it is possible to identify and teach the moves necessary for students to succeed in college writing. In their view, their identification of differences in the writing of students entering college with different levels of proficiency suggests specific candidates for the kind of targeted instruction that can help all students succeed.