College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals



Donahue & Foster-Johnson. Text Analysis for Evidence of Transfer. RTE, May 2018. Posted 07/13/2018.

Donahue, Christiane, and Lynn Foster-Johnson. “Liminality and Transition: Text Features in Postsecondary Student Writing.” Research in the Teaching of English 52.4 (2018): 359-381. Web. 4 July 2018.

Christiane Donahue and Lynn Foster-Johnson detail a study of student writing in the “liminal space” between a “generic” first-year-writing course and a second, “discipline-inspired” first-year seminar (365). They see their study as unusual in that it draws its data and conclusions from empirical “corpus analysis” of the texts students produce (376-77). They also present their study as different from much other research in that it considered a “considerably larger” sample that permits them to generalize about the broader population of the specific institution where the study took place (360).

The authors see liminal spaces as appropriate for the study of the issue usually referred to as “transfer,” which they see as a widely shared interest across composition studies (359). They contend that their study of “defined features” in texts produced as students move from one type of writing course to another allows them to identify “just-noticeable difference[s]” that they believe can illuminate how writing develops across contexts (361).

The authors’ literature review examines definitions of liminality as well as wide-ranging writing scholarship that attempts to articulate how knowledge created in one context changes as it is applied in new situations. They cite Linda Adler-Kassner’s 2014 contention that students may benefit from “learning strategy rather than specific writing rules or forms,” thus developing the ability to adapt to a range of new contexts (362).

One finding from studies such as that of Lucille McCarthy in 1987 and Donahue in 2010 is that while students change the way they employ knowledge as they move from first to final years of education, they do not seem fully aware of how their application of what they know has changed (361-62). Thus, for Donahue and Foster-Johnson, the actual features detectable in the texts themselves can be illuminating in ways that other research methodologies may not (362, 364).

Examining the many terms that have been used to denote “transfer,” Donahue and Foster-Johnson advocate for “models of writing knowledge reuse” and “adaptation,” which capture the recurrence of specific features and the ways these features may change to serve a new exigency (364).

The study took place in a “selective” institution (366) defined as a “doctoral university of high research activity” (365). The student population is half White, with a diverse range of other ethnicities, and 9% first-generation college students (366). Students take either one or two sections of general first-year writing, depending on needs identified by directed self-placement (366), and a first-year seminar that is “designed to teach first-year writing while also introducing students to a topic in a particular (inter)discipline and gesturing toward disciplinary writing” (365). The authors argue that this sequence provides a revealing “’bridge’ moment in students’ learning” (365).

Students were thus divided into three cohorts depending on which courses they took and in which semester. Ninety percent of the instructors provided materials, and the researchers collected “all final submitted drafts of the first and last ‘source-based’ papers” for 883 students. Fifty-two papers from each cohort were randomly chosen, resulting in 156 participants (366-67). Each participating student’s work was examined at four time points, with the intention of identifying the presence or absence of specific features (368).

The features under scrutiny were keyed to faculty-developed learning outcomes for the courses (367-68). The article discusses the analysis of seven: thesis presence, thesis type, introduction type, overall text structure, evidence types, conclusion type, and overall essay purpose (367). Each feature was further broken down into “facets,” 38 in all, that illustrated “the specific aspects of the feature” (367-68).

The authors provide detailed tables of their results and list findings in their text. They report that “the portrait is largely one of stability,” but note students’ ability to vary choices “when needed” (369). Statistically significant “change[s] across time” appeared in 13% of comparisons for Cohort 1, 29% for Cohort 2, and 16% for Cohort 3. An example of a stable strategy is the use of “one explicit thesis at the beginning” of a paper (371); a strategy “rarely” used was “a thesis statement [placed] inductively at the middle or end” (372). Donahue and Foster-Johnson argue that these results indicate that students had learned useful options that they could draw on as needed in different contexts (372).

The authors present a more detailed examination of the relationship between “thesis type” and “overall essay aim” (374). They give examples of strong correlations between, for example, “the purpose of analyzing an object” and the use of “an interpretive thesis” as well as negative correlations between, for example, “the purpose of analyzing an object” and “an evaluative thesis” (374). In their view, these data indicate that some textual features are “congruen[t]” with each other while others are “incompatible” (374). They find that their textual analysis documents these relationships and students’ reliance on them.

They note a “reset effect”: in some cases, students increased their use of a facet (e.g., “external source as authority”) over the course of the first class, but then reverted to using the facet less at the beginning of the second class, only to once again increase their reliance on such strategies as the second class progressed (374-75), becoming “‘repeating newcomers’ in the second term” (374).

Donahue and Foster-Johnson propose as one explanation for the observed stability the possibility that “more stays consistent across contexts than we might readily acknowledge” (376), or that in general-education contexts in which exposure to disciplinary writing is preliminary, the “boundaries we imagine are fuzzy” (377). They posit that it is also possible that curricula may offer students mainly “low-road” opportunities for adaptation or transformation of learned strategies (377). The authors stress that in this study, they were limited to “what the texts tell us” and thus could not speak to students’ reasons for their decisions (376).

Questions for future research, they suggest, include whether students are aware of deliberately reusing strategies and whether “students reusing features do so automatically or purposefully” (377). Research might link student work to particular students with identifiers that would enable follow-up investigation.

They argue that compared to the methods of textual analysis and “topic-modeling” their study employs, “current assessment methods . . . are crude in their construct representation and antiquated in the information they provide” (378). They call for “a new program of research” that exploits a new “capability to code through automated processes and allow large corpora of data to be uploaded and analyzed rapidly under principled categories of analysis” (378).


Salig et al. Student Perceptions of “Essentialist Language” in Persuasive Writing. J of Writ. Res., 2018. Posted 05/10/2018.

Salig, Lauren K., L. Kimberly Epting, and Lizabeth A. Rand. “Rarely Say Never: Essentialist Rhetorical Choices in College Students’ Perceptions of Persuasive Writing.” Journal of Writing Research 9.3 (2018): 301-31. Web. 3 May 2018.

Lauren K. Salig, L. Kimberly Epting, and Lizabeth A. Rand investigated first-year college students’ perceptions of effective persuasive writing. Prompted by ongoing research suggesting that students struggle with the analytical and communicative skills demanded by this genre, the study focused on students’ attitudes toward “essentialist” language in persuasive discourse.

The authors cite research indicating that “one-sided” arguments are less persuasive than those that acknowledge opposing views and present more than one perspective on an issue (303); they posit that students’ failure to develop multi-sided arguments may account for assessments showing poor command of persuasive writing (303). Salig et al. argue that “the language used in one-sided arguments and the reasons students might think one-sidedness benefits their writing have not been extensively evaluated from a psychological perspective” (304). Their investigation is intended both to clarify what features students believe contribute to good persuasive writing and to determine whether students actually apply these beliefs in identifying effective persuasion (305).

The authors invoke the term “essentialism” to encompass different forms of language that exhibit different levels of “black-and-white dualism” (304). Such language may fail to acknowledge exceptions to generalizations; one common way it manifests is the tendency to include “boosters” such as “always,” “every,” and “prove,” while eliminating “hedges” such as qualifiers (304). “Essentialist” thinking, the authors contend, “holds that some categories have an unobservable, underlying ‘essence’ behind them” (304). Salig et al. argue that while some subsets of “generic language” may enable faster learning because they allow the creation of useful categories, the essentialist tendency in such language to override analytical complexity can prove socially harmful (305).

The investigation involved two studies designed, first, to determine whether students conceptually recognized the effects of essentialist language in persuasive writing, and second, to assess whether they were able to apply this recognition in practice (306).

Study 1 consisted of two phases. In the first, students were asked to generate features that either enhanced or detracted from the quality, persuasiveness, and credibility of writing (307). Twenty-seven characteristics emerged after coding; these were later reduced to 23 by combining some factors. Features related to essentialism, Bias and One-sidedness, were listed as damaging to persuasiveness and credibility, while Refutation of Opposition and Inclusion of Other Viewpoints were seen as improving these two factors. Although, in the authors’ view, these responses aligned with educational standards such as the Common Core State Standards, students did not see these four characteristics as affecting the quality of writing (309).

In Phase 2 of Study 1, students were prompted to list “writing behaviors that indicated the presence of the specified characteristic” (310). The researchers developed the top three behaviors for each feature into sentence form; they provide the complete list of these student-generated behavioral indicators (311-14).

From the Study 1 results, Salig et al. propose that students do conceptually grasp “essentialism” as a negative feature and can name ways that it may show up in writing. Study 2 was designed to measure the degree to which this conceptual knowledge influences student reactions to specific writing in which the presence or absence of essentialist features becomes the variable under examination (314-15).

In this study 79 psychology students were shown six matched pairs of statements, varying only in that one represented essentialist language and the other contained hedges and qualifiers (315). In each case, participants were asked to state which of the two statements was “better,” and then to refer to a subset of the 23 features identified in Study 1 that had been narrowed to focus on persuasiveness in order to provide reasons for their preference (316). They were asked to set aside their personal responses to the topic (318). The researchers provide the statement pairs, three of which contained citations (317-18).

In Likert-scale responses, the students generally preferred the non-essentialist samples (319), although the “driving force” for this finding was that students preferring non-essentialist samples rated the essentialist samples very low in persuasiveness (323). Further, of the 474 choices, 222 indicated that essentialist examples were “better,” while 252 chose the non-essentialist examples, a difference that the researchers report as not significant (321).
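The reported non-significance of the 222-to-252 split can be sanity-checked with a quick exact binomial test against an even split. The article does not specify which test the researchers used, so this particular test, and the function name below, are assumptions for illustration only:

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed)

# 222 of 474 total choices favored the essentialist samples
p_value = binom_two_sided(222, 474)
print(round(p_value, 3))  # comfortably above 0.05, consistent with "not significant"
```

Under this (assumed) test, a 222/252 split in 474 trials is well within the range expected by chance, which accords with the researchers’ report of a non-significant difference.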

Salig et al. find that the reasons students chose for preferring essentialist language differed from their reasons for preferring non-essentialist examples. Major reasons for the essentialist choice were Voice/Tone, Concision, Persuasive Effectiveness, One-sidedness, and Grabs/Retains Attention. Students who chose non-essentialist samples as better cited Other Viewpoints, Argument Clarity/Consistency, Detail, Writer’s Knowledge, Word Choice/Language, and Bias (322).

Participants were divided almost equally among those who consistently chose non-essentialist options, those who consistently chose essentialist options, and those who chose each type half of the time (323). Correlations indicated that students who were somewhat older (maximum age was 21, with M = 18.49 years) “were associated with lower persuasiveness ratings on essentialist samples than younger students or students with less education” (324). The authors posit that the second study examined a shift from “conceptual to operational understanding” (324) and thus might indicate the effects either of cognitive development or increased experience or some combination in conjunction with other factors (325).

In addition, the authors consider effects of current methods of instruction on students’ responses to the samples. They note that “concision” showed up disproportionately as a reason given by students who preferred essentialist samples. They argue that students have possibly inferred that “strong, supported, and concise arguments” are superior (326). Citing Linda Adler-Kassner, they write that students are often taught to support their arguments before they are encouraged to include counterarguments (326). The authors recommend earlier attention, even before high school, to the importance of including multiple viewpoints (328).

The study also revealed an interaction between student preferences and the particular sets, with sets 4 and 5 earning more non-essentialist votes than other sets. The length of the samples and the inclusion of citations in set 4 led the researchers to consider whether students perceived these as appropriate for “scholarly” or more formal contexts in comparison to shorter, more emphatic samples that students may have associated with “advertising” (327). Sets 4 and 5 also made claims about “students” and “everybody,” prompting the researchers to suggest that finding themselves the subjects of sweeping claims may have encouraged students to read the samples with more awareness of essentialist language (327).

The authors note that their study examined “one, and arguably the simplest, type” of essentialist language. They urge ongoing research into the factors that enable students not just to recognize but also to apply the concepts that characterize non-essentialist language (328-29).


Carter and Gallegos. Assessing Celebrations of Student Writing. CS, Spring 2017. Posted 09/03/2017.

Carter, Genesea M., and Erin Penner Gallegos. “Moving Beyond the Hype: What Does the Celebration of Student Writing Do for Students?” Composition Studies 45.1 (2017): 74-98. Web. 29 Aug. 2017.

Genesea M. Carter and Erin Penner Gallegos present research on “celebrations of student writing (CSWs)” (74), arguing that while extant accounts of these events portray them as positive and effective additions to writing programs, very little research has addressed students’ own sense of the value of the CSW experience. To fill this gap, Carter and Gallegos interviewed 23 students during a CSW at the University of New Mexico (UNM) and gathered data from an anonymous online survey (84).

As defined by Carter and Gallegos, a CSW asks students to represent the writing from their coursework in a public forum through posters and art installations (77). Noting that the nature of a CSW is contingent on the particular institution at which it takes place (75, 91), the authors provide specific demographic data about UNM, where their research was conducted. The university is both a “federally designated Hispanic Serving Institution (HSI)” and “a Carnegie-designated very high research university” (75), thus combining research-level expectations with a population of “historically marginalized,” “financially very needy” students with “lower educational attainment” (76). Carter and Gallegos report on UNM’s relatively low graduation rates as compared to similar universities and the “particular challenges” faced by this academic community (76).

Among these challenges, in the authors’ view, was a “negative framing of the student population from the university community and city residents” (76). Exposure in 2009 via a meeting with Linda Adler-Kassner to the CSW model in place at Eastern Michigan University led graduate students Carter and Gallegos to develop a similar program at UNM (76-77). Carter and Gallegos were intrigued by the promise of programs like the one at EMU to present a new, positive narrative about students and their abilities to the local academic and civic communities.

They recount the history of the UNM CSW as a project primarily initiated by graduate students that continues to derive from graduate-student interests and participation while also being broadly adopted by the larger university and in fact the larger community (78, 92). In their view, the CSW differs from other institutional showcases of student writing such as an undergraduate research day and a volume of essays selected by judges in that it offers a venue for “students who lack confidence in their abilities or who do not already feel that they belong to the university community” (78). They argue that changing the narrative about student writing requires a space for recognizing the strengths of such historically undervalued students.

Examining CSWs from a range of institutions in order to discover what the organizers believe these events achieve, the authors found “a few commonalities” (79). Organizers underscored their belief that the audience engagement offered by a CSW reinforced the nature of writing as “social, situational, and public,” a “transactional” experience rather than the “one-dimensional” model common in academic settings (80). Further, CSWs are seen to endorse student contributions to research across the university community and to inspire recognition of the multiple literacies that students bring to their academic careers (81). The authors’ review also reveals organizers’ beliefs that such events will broaden students’ understanding of the writing process by foregrounding how writing evolves through revision into different modes (81).

An important thread is the power of CSWs to enhance students’ “sense of belonging, both to an intellectual and a campus community” (82). Awareness that their voices are valued, according to the authors’ research, is an important factor in student persistence among marginalized populations (81). Organizers see CSWs as encouraging students to see themselves as “authors within a larger community discourse” (83).

Carter and Gallegos note a critique by Mark Mullen, who argues that CSWs can actually exploit student voices in that they may actually be a “celebration of the teaching of writing, a reassertion of agency by practitioners who are routinely denigrated” (qtd. in Carter and Gallegos 84). The authors find from their literature review that, indeed, few promotions of CSWs in the literature include student voices (84). They contend that their examination of student perceptions of the CSW process can further understanding of the degree to which these events meet their intended outcomes (84).

Their findings support the expectation that students would find the CSW valuable but reveal several ways in which the hopes of supporters and the responses of students are “misaligned” (90). While the CSW did contribute to students’ sense of writing as a social process, students expressed most satisfaction in being able to interact with their peers, sharing knowledge and experiencing writing in a new venue as fun (86). Few students understood how the CSW connected to the goals of their writing coursework, such as providing a deeper understanding of rhetorical situation and audience (87). While students appreciated the chance to “express” their views, the authors write that students “did not seem to relate expression to being heard or valued by the academic community” or to “an extension of agency” (88).

For the CSW to more clearly meet its potential, the authors recommend that planners at all levels focus on building metacognitive awareness of the pedagogical value of such events through classroom activities (89). Writing programs involved in CSWs, according to the authors, can develop specific outcomes beyond those for the class as a whole that define what supporters and participants hope the event will achieve (89-90). Students themselves should be involved in planning the event as well as determining its value (90), with the goal of “emphasizing to their student participants that the CSW is not just another fun activity but an opportunity to share their literacies and voices with their classmates and community” (90).

A more detailed history of the development of the UNM event illustrates how the CSW became increasingly incorporated into other university programs and how it ultimately drew participation from local artists and performers (92-93). The authors applaud this “institutionalizing” of the event because such broad interest and sponsorship mean that the CSW can continue to grow and spread knowledge of student voices to other disciplines and across the community (93).

They see “downsides” in this expansion in that the influence of different sponsors from year to year and the event’s attachment to initiatives outside of writing tend to separate the CSW from the writing courses it originated to serve. Writing programs in venues like UNM may find it harder to develop appropriate outcomes, assess results, and make sure that the CSW remains a meaningful part of a writing program’s mission (93). The authors recommend that programs hoping that a CSW will enhance actual writing instruction should commit adequate resources and attention to the ongoing events. The authors write that, “imperatively,” student input must be part of the process in order to prevent such events from “becom[ing] merely another vehicle for asserting the value of the teaching of writing” (94; emphasis original).



Blythe and Gonzales. Using Screencast Videos to Capture What Students Do. June CCC. Posted 09/08/2016.

Blythe, Stuart, and Laura Gonzales. “Coordination and Transfer across the Metagenre of Secondary Research.” College Composition and Communication 67.4 (2016): 607-33. Print.

Stuart Blythe and Laura Gonzales describe a study of students’ writing practices using screencast videos to record their activities. They hoped to shed light on the question of whether students “transfer” their learning in first-year writing classes to other contexts.

Five researchers recruited students from multiple sections of a cross-disciplinary biology course that met a university-wide requirement (610). Coordinating with the professor in charge of a large lecture section, the researchers distributed index cards to students in the smaller discussion sections, instructing students willing to participate in the study to provide contact information (612). Ultimately twelve students agreed to take part (613).

Blythe and Gonzales review studies by multiple scholars that find little or no evidence of transfer of first-year writing content, supporting Doug Brent’s “glass half-empty” interpretation of the issue of transfer (608). Along with Elizabeth Wardle, as well as Linda Adler-Kassner, John Majewski, and Damian Koshnick, Blythe and Gonzales posit that the learning involved in writing is difficult to research because it is non-linear and, according to Joseph Petraglia, does not yield to “‘well-structured’ formulas or algorithms” (qtd. in Blythe and Gonzales 608). The authors also propose that researchers may be handicapped by their use of “a limited set of methods” such as interviews and focus groups (608).

Blythe and Gonzales contend that their use of screen-capture technology improves on interviews because, unlike an interview, this method does not rely on memory or the interaction between the interviewer and interviewee but rather reveals what actually happens “in that moment” of actual composition (Raul Sanchez, qtd. in Blythe and Gonzales 613). The authors also state that, unlike think-aloud protocols, screencast videos do not add an unfamiliar, distracting element to students’ processes; they note that many students “reported forgetting that their work was being recorded” (614).

Students were instructed to upload three fifteen-minute videos over the course of their composition process (613). Each student then joined a researcher in an “artifact-based interview” designed to overcome the failure of the screencast process to record the student’s reasons for various choices (614).

In choosing the biology course for study, the researchers expected to analyze genres such as lab reports, but were surprised to find that “students were being asked to write arguments using published sources,” specifically involving the use of DDT to control malaria (610).

Citing Michael Carter’s use of the term metagenre to denote “ways of knowing and doing that cross disciplines” (610; emphasis original), Blythe and Gonzales locate the biology assignment in such a metagenre. Following Carter, the authors distinguish between “knowing that,” which designates “unique sets of knowledge” specific to each discipline, and “knowing how,” indicating “share[d] ways of knowing” (610). In this view, these “ways of knowing” constitute metagenres (610). Four metagenres listed by Carter are problem-solving, empirical inquiry, research from sources, and performance (610-11). The biology assignment falls into the cross-disciplinary metagenre of “research from sources.”

The software allowed the researchers to code thirty-six videos capturing student composing processes and to generate “visualizations” or graphs that recorded student movement among the texts they worked with as they wrote (614, 616). Major patterns in student processes emerged from this coding and from the interviews in which students affirmed the categorizations recognized by the researchers’ analysis (614).

Three major conclusions resulted. First, “[s]tudents select sources rhetorically” (615). Specifically, students chose sources that they thought would meet their instructors’ approval (622). Although they used Google to generate ideas and plan, they cited only information from library databases and Google Scholar, as specified by the assignment (623). In conducting searches, the students did not venture beyond the first entry in a results list and thus often cited the same sources (624).

The authors remark:

Students were not concerned with learning about DDT and malaria as intended by the assignment guidelines. Instead, students used sources to constantly ensure they were meeting the assignment requirements in a way that would please their instructor. (622-23)

Second, “[s]tudents coordinate multiple texts” (615). The screen captures revealed that students moved rapidly among six different kinds of texts, for example, from their drafts to websites found on Google to the assignment rubric (618). They spent an average of 12.14 seconds on each type of text (619). The preferred process was to paste text from sources into the paper, reword it, then cite, resulting, in one student’s example, in the construction of the paper “sentence by sentence” through the search for “necessary piece[s] of information” (620).

While Blythe and Gonzales agree that pasting and rewording might constitute what Rebecca Moore Howard terms “patchwriting,” they contend that using what Shaun Slattery refers to as “textual coordination” to “find bits of text from multiple sources and rework them into a new text designed for a particular purpose” resembles the process followed by professional writers engaging with a topic on which they lack expertise (627). They cite Howard’s claim that this writing technique can be useful in “finding a way into” a new discourse (qtd. in Blythe and Gonzales 627). The authors argue, though, that students lack the social and professional networks that scholars like Stacey Pigg and Jason Swarts find underpinning the work of professional writers. Students relied on the assignment rubric for their understanding of the purpose and possibilities of their task (628).

Third, students do not generally credit their college writing courses for teaching them the skills they deem important in crafting a paper. All students in the study stated that they used strategies learned in high school; Blythe and Gonzales found that many relied on “adjusting the same basic structures” like the five-paragraph theme (625). First-year writing, according to this study, served as “another space in which they get to practice the writing strategies they learned earlier in their academic careers” (626). Such practice, students seemed to believe, contributed to improvements in their writing, but interviews suggested that their sense of how these improvements occurred was vague (626).

The authors close with recommendations that first-year-writing instruction can usefully focus on “expand[ing] the resources and networks” that can contribute to students’ writing processes, introducing them to “specialized communities or connections” (629). Blythe and Gonzales further suggest that transfer studies might attend more carefully to what Elizabeth Wardle calls “meta-awareness” about writing, particularly awarenesses that students bring to writing classes from prior experience (630).


Del Principe and Ihara. Reading at a Community College. TETYC, Mar. 2016. Posted 04/10/2016.

Del Principe, Annie, and Rachel Ihara. “‘I Bought the Book and I Didn’t Need It’: What Reading Looks Like at an Urban Community College.” Teaching English in the Two-Year College 43.3 (2016): 229-44. Web. 10 Mar. 2016.

Annie Del Principe and Rachel Ihara conducted a qualitative study of student reading practices at Kingsborough Community College, CUNY. They held interviews and gathered course materials from ten students over the span of the students’ time at the college between fall 2011 and fall 2013, amassing “complete records” for five (231). They found a variety of definitions of acceptable reading practices across disciplines; they urge English faculty to recognize this diversity, but they also advocate for more reflection from faculty in all academic subject areas on the purposes of the reading they assign and how reading can be supported at two-year colleges (242).

Four of the five students who were intensively studied placed into regular first-year composition and completed associate’s degrees while at Kingsborough; the fifth enrolled in a “low-level developmental writing class” and transferred to a physician’s assistant program at a four-year institution in 2015 (232). The researchers’ inquiry covered eighty-three different courses and included twenty-three hours of interviews (232).

The authors’ review of research on reading notes that many different sources across institutions and disciplines see difficulty with reading as a reason that students often struggle in college. The authors recount a widespread perception that poor preparation, especially in high school, and students’ lack of effort are to blame for students’ difficulties but contend that the ways in which faculty frame and use reading also influence how students approach assigned texts (230). Faculty, Del Principe and Ihara write, often do not see teaching reading as part of their job and opt for modes of instruction that convey information in ways that they perceive as efficient, such as lecturing extensively and explaining difficult texts rather than helping students work through them (230).

A 2013 examination of seven community colleges in seven states by the National Center for Education and the Economy (NCEE) reported that the kinds of reading and writing students do in these institutions “are not very cognitively challenging”; don’t require students “to do much” with assigned reading; and demand “performance levels” that are only “modest” (231). This study found that more intensive work on analyzing and reflecting on texts occurred predominately in English classes (231). The authors argue that because community-college faculty are aware of the problems caused by reading difficulties, these faculty are “constantly experimenting” with strategies for addressing these problems; this focus, in the authors’ view, makes community colleges important spaces for investigating reading issues (231).

Del Principe and Ihara note that in scholarship by Linda Adler-Kassner and Heidi Estrem and by David Jolliffe as well as in the report by NCEE, the researchers categorize the kinds of reading students are asked to do in college (232-33). The authors state that their “grounded theory approach” (232) differs from the methods in these works in that they

created categories based on what students said about how they used reading in their classes and what they did (or didn’t do) with the assigned reading rather than on imagined ways of reading or what was ostensibly required by the teacher or by the assignment. (233)

This methodology produced “five themes”:

  • “Supplementing lecture with reading” (233). Students reported this activity in 37% of the courses examined, primarily in non-English courses that depended largely on lecture. Although textbooks were assigned, students received most of the information in lectures but turned to reading to “deepen [their] understanding” or for help if the lecture proved inadequate in some way (234).
  • “Listening and taking notes as text” (233). This practice, encountered in 35% of the courses, involved situations in which a textbook or other reading was listed on the syllabus but either implicitly or explicitly designated as “optional.” Instructors provided handouts or PowerPoint outlines; students combined these with notes from class to create de facto “texts” on which exams were based. According to Del Principe and Ihara, “This marginalization of long-form reading was pervasive” (235).
  • “Reading to complete a task” (233). In 24% of the courses, students reported using reading for in-class assignments like lab reports or quizzes; in one case, a student described a collaborative group response to quizzes (236). Other activities included homework such as doing math problems. Finally, students used reading to complete research assignments. The authors discovered very little support for or instruction on the use and evaluation of materials incorporated into research projects and posit that much of this reading may have focused on “dubious Internet sources” and may have involved cutting and pasting (237).
  • “Analyzing text” (233). Along with “reflecting on text,” below, this activity occurred “almost exclusively” in English classes (238). The authors describe assignments calling for students to attend to a particular line or idea in a text or to compare themes across texts. Students reported finding “on their own” that they had to read more slowly and carefully to complete these tasks (238).
  • “Reflecting on text” (233). Only six of the eighty-three courses asked students to “respond personally” to reading; only one was not an English course (239). The assignments generally led to class discussion, in which, according to the students, few class members participated, possibly because “Nobody [did] the reading” (student, qtd. in Del Principe and Ihara 239; emendation original).

Del Principe and Ihara focus on the impact of instructors’ “following up” on their assignments with activities that “require[d] students to draw information or ideas directly from their own independent reading” (239). Such follow-up surfaced in only fourteen of the eighty-three classes studied, six of them English classes. Follow-up in English included informal responses and summaries as well as assigned uses of outside material in longer papers, while in courses other than English, quizzes or exams encouraged reading (240). The authors found that in courses with no follow-up, “students typically did not do the reading” (241).

Del Principe and Ihara acknowledge that composition professionals will find the data “disappointing,” but feel that it’s important not to be misdirected by a “specific disciplinary lens” into dismissing the uses students and other faculty make of different kinds of reading (241). In many classes, they contend, reading serves to back up other kinds of information rather than as the principal focus, as it does in English classes. However, they do ask for more reflection across the curriculum. They note that students are often required to purchase expensive books that are never used. They hope to trigger an “institutional inquiry” that will foster more consideration of how instructors in all fields can encourage the kinds of reading they want students to do (242).