College Composition Weekly: Summaries of research for college writing professionals

Read, Comment On, and Share News of the Latest from the Rhetoric and Composition Journals


Kolln and Hancock. Histories of U. S. Grammar Instruction. English Teaching: Practice and Critique (NZ), 2005. Posted 04/22/2018.

Kolln, Martha, and Craig Hancock. “The Story of English Grammar in United States Schools.” English Teaching: Practice and Critique 4.3 (2005): 11-31. Web. 4 Mar. 2018.

Martha Kolln and Craig Hancock, publishing in English Teaching: Practice and Critique in 2005, respond in parallel essays to what they consider the devaluation of grammar teaching in United States schools and universities. English Teaching: Practice and Critique is a publication of Waikato University in New Zealand. The two essays trace historical developments in attitudes toward grammar education in U. S. English language curricula.

Kolln’s essay reports on a long history of uncertainty about teaching grammar in United States classrooms. Noting that confusion about the distinction between “grammar” and “usage” has pervaded discussions since the beginning of the twentieth century, Kolln cites studies from 1906 and 1913 to illustrate the prevalence of doubts that the time needed to teach grammar was justified in light of the many other demands upon public-school educators (13).

Citing Richard Braddock, Richard Lloyd-Jones, and Lowell Schoer’s 1963 Research in Written Composition to note that “early research in composition and grammar was not highly developed” (13), Kolln argues that the early studies were flawed (14). A later effort to address grammar teaching, An Experience Curriculum in English, was advanced by a 1936 National Council of Teachers of English (NCTE) committee; this program, Kolln writes, “recommended that grammar be taught in connection with writing, rather than as an isolated unit of study” (14). She contends that the effort ultimately failed because teachers did not accept its focus on “functional grammar” in place of “the formal method [they] were used to” (14).

In Kolln’s history, the hiatus following this abortive project ended with the advent of structural linguistics in the 1950s. This new understanding of the workings of English grammar was originally received enthusiastically; Harold B. Allen’s 1958 Readings in Applied English Linguistics drew on nearly 100 articles, including many from NCTE (12). This movement also embraced Noam Chomsky’s 1957 Syntactic Structures; the NCTE convention in 1963 featured “twenty different sessions on language, . . . with 50 individual papers” under categories like “Semantics,” “Structural Linguistics for the Junior High School,” and “the Relationship of Grammar to Composition” (14-15).

Excitement over such “new grammar” (15), however, was soon “swept aside” (12). Kolln posits that Chomsky’s complex generative grammar, which was not meant as a teaching tool, did not adapt easily to the classroom (15). She traces several other influences supporting the continued rejection of grammar instruction. Braddock et al. in 1963 cited a study by Roland Harris containing “serious flaws,” according to two critics who subsequently reviewed it (16). This study led Braddock et al. to state that grammar instruction not only failed to improve student writing but had “a harmful effect” (Braddock et al., qtd. in Kolln and Hancock 15). Kolln reports that this phrase is still referenced to argue against teaching grammar (15).

Other influences on attitudes toward grammar, for Kolln, include the advent of “student-centered” teaching after the Dartmouth seminar in 1966, the ascendancy of the process movement, and a rejection of “elitist” judgments that denigrated students’ home languages (16-17). As a result of such influences and others, Kolln writes, “By 1980, the respected position that grammar had once occupied was no longer recognized by NCTE” (17).

Addressing other publications and position statements that echo this rejection of grammar instruction, Kolln writes that teacher education, in particular, has been impoverished by the loss of attention to the structure of language (19). She contends that “[t]he cost to English education of the NCTE anti-grammar policy is impossible to calculate” (19).

In the creation of an NCTE official assembly, The Assembly for the Teaching of English Grammar (ATEG), she sees a shift toward an understanding of grammar that distinguishes it from rote drill on correctness. Several NCTE publications have forwarded the views of this group, including the book Grammar Alive! A Guide for Teachers, and articles in English Journal and Language Arts (20). Kolln urges that grammar, properly understood, be “seen as a legitimate part of the Language Arts curriculum that goes beyond an aid to writing” (20).

Hancock frames his discussion with a contemporaneous article by R. Hudson and J. Walmsley about trends in grammar instruction in the U.K. He sees a consensus among educators in England that “an informed understanding of language and an appropriate metalanguage with which to discuss it” are important elements of language education (qtd. in Kolln and Hancock 21). Further, this consensus endorses a rejection of “the older, dysfunctional, error-focused, Latin-based school grammar” (21-22).

In his view, the grounds for such widespread agreement in the United States, rather than encouraging an appreciation of well-designed grammar instruction, in fact lead away from the possibility of such an appreciation (22-23). He sees a U. S. consensus through the 1960s that literature, especially as seen through New Criticism, should be the principal business of English instruction. The emphasis on form, he writes, did not embrace linguistic theory; in general, grammar was “traditional” if addressed at all, and was seen as the responsibility of elementary schools (22). Literature was displaced by Critical Theory, which challenged the claim that “there is or should be a monolithic, central culture or a received wisdom” in the valuation of texts (22).

Similarly, he maintains that the advent of composition as a distinct field with its focus on “what writers actually do when they write” led to studies suggesting that experienced writers saw writing as meaning-making while inexperienced writers were found to, in Nancy Sommers’s words, “subordinate the demands of the specific problems of the text to the demands of the rules” (qtd. in Kolln and Hancock 23). Downplaying the rules, in this view, allowed students to engage more fully with the purposes of their writing.

In Hancock’s view, language educators in the U. S. distanced themselves from grammar instruction through their focus on “‘empowerment’ in writing,” an effort to address the needs of more diverse students (24). Meeting those needs required a new acknowledgment of the varying contexts in which language occurs and an effort to value the many different forms language might take. Recognition of the damage done by reductive testing models also drove a retreat from a grammar defined as “policing people’s mistakes” (24-25).

Hancock argues that the public arena in which students tend to be judged does not allow either correctness or grammar to “simply be wished away” (25). He suggests that the “minimalist” theories of Constance Weaver in the 1990s and of linguists like Steven Pinker are attempts to address the need for students to meet some kinds of standards, even though those standards are often poorly defined. These writers, in Hancock’s reading, contend that people learn their native grammars naturally and need little intervention to achieve their communicative goals (25, 27).

Hancock responds that a problem with this approach is that students who do not rise to the expected standard are blamed for their “failure to somehow soak it up from exposure or from the teacher’s non-technical remarks” (25). Hancock laments the “progressive diminution of knowledge” that results when so many teachers themselves are taught little about grammar (25): the lack of a “deep grounding in knowledge of the language” means that “[e]diting student writing becomes more a matter of what ‘feels right’” (26).

As a result of this history, he contends, “language-users” remain “largely unconscious of their own syntactic repertoire” (26), while teachers struggle with contradictory demands with so little background that, in Hancock’s view, “they are not even well-equipped to understand the nature of the problem” (29). He faults linguists as well for debunking prescriptive models while failing to provide “a practical alternative” (26).

Hancock presents a 2004 piece by Laura Micciche as a “counter-argument to minimalist approaches” (28). Hancock reads Micciche to say that outright rejection is not the only available response to the problems posed by grammar instruction. He interprets her as arguing that a knowledge of language is “essential to formation of meaning” (28):

We need a discourse about grammar that does not retreat from the realities we face in the classroom—a discourse that takes seriously the connection between writing and thinking, the interwoven relationship between what we say and how we say it. (Micciche, qtd. in Kolln and Hancock 28)

Hancock deplores the “vacuum” created by the rejection of grammar instruction, an undefended space into which he feels prescriptive edicts are able to insert themselves (28, 29). Like Kolln, he points to ATEG, which in 2005-2006 was working to shift NCTE’s “official position against the teaching of formal grammar” (28). Hancock envisions grammar education that incorporates “all relevant linguistic grammars” and a “thoughtfully selected technical terminology” (28), as well as an understanding of the value of home languages as “the foundation for the evolution of a highly effective writing voice” (29). Such a grammar, he maintains, would be truly empowering, promoting an understanding of the “connection between formal choices and rhetorical effect” (26).



Bowden, Darsie. Student Perspectives on Paper Comments. J of Writing Assessment, 2018. Posted 04/14/2018.

Bowden, Darsie. “Comments on Student Papers: Student Perspectives.” Journal of Writing Assessment 11.1 (2018). Web. 8 Apr. 2018.

Darsie Bowden reports on a study of students’ responses to teachers’ written comments in first-year writing classes at DePaul University, a four-year, private Catholic institution. Forty-seven students recruited from thirteen composition sections provided first drafts with comments and final drafts, and participated in two half-hour interviews. Students received a $25 bookstore gift certificate for completing the study.

Composition classes at DePaul use the 2000 version of the Council of Writing Program Administrators’ (WPA) Outcomes to structure and assess the curriculum. Of the thirteen instructors whose students were involved in the project, four were full-time non-tenure track and nine were adjuncts; Bowden notes that seven of the thirteen “had graduate training in composition and rhetoric,” and all “had training and familiarity with the scholarship in the field.” All instructors selected were regular attendees at workshops that included guidance on responding to student writing.

For the study, instructors used Microsoft Word’s comment tool in order to make student experiences consistent. Both comments and interview transcripts were coded. Comment types were classified as “in-draft” corrections (actual changes made “in the student’s text itself”); “marginal”; and “end,” with comments further classified as “surface-level” or “substance-level.”

Bowden and her research team of graduate teaching assistants drew on “grounded theory methodologies” that relied on observation to generate questions and hypotheses rather than on preformed hypotheses. The team’s research questions were

  • How do students understand and react to instructor comments?
  • What influences students’ process of moving from teacher comments to paper revision?
  • What comments do students ignore and why?

Ultimately the third question was subsumed by the first two.

Bowden’s literature review focuses on ongoing efforts by Nancy Sommers and others to understand which comments actually lead to effective revision. Bowden argues that research often addresses “the teachers’ perspective rather than that of their students” and that it tends to assess the effectiveness of comments by how they “manifest themselves in changes in subsequent drafts.” The author cites J. M. Fife and P. O’Neill to contend that the relationship between comments and effects in drafts is not “linear” and that clear causal connections may be hard to discern. Bowden presents her study as an attempt to understand students’ actual thinking processes as they address comments.

The research team found that on 53% of the drafts, no in-draft notations were provided. Bowden reports on variations in length and frequency in the 455 marginal comments the team examined, as well as in the end comments that appeared on almost all of the 47 drafts. The number of substance-level comments exceeded that of surface-level comments.

Her findings accord with much research in discovering that students “took [comments] seriously”; they “tried to understand them, and they worked to figure out what, if anything, to do in response.” Students emphasized comments that asked questions, explained responses, opened conversations, and “invited them to be part of the college community.” Arguing that such substance-level comments were “generative” for students, Bowden presents several examples of interview exchanges, some illustrating responses in which the comments motivated the student to think beyond the specific content of the comment itself. Students often noted that teachers’ input in first-year writing was much more extensive than that of their high school teachers.

Concerns about “confusion” occurred in 74% of the interviews. Among strategies for dealing with confusion were “ignor[ing] the comment completely,” trying to act on the comment without understanding it, or writing around the confusing element by changing the wording or structure. Nineteen students “worked through the confusion,” and seven consulted their teachers.

The interviews revealed that in-class activities like discussion and explanation influenced students’ attempts to respond to comments, as did outside factors like stress and time management. In discussions about final drafts, students revealed that they had sought feedback from additional readers, like parents or friends. They were also more likely to mention peer review in the second interview; although some mentioned the writing center, none made use of it for the drafts included in the study.

Bowden found that students “were significantly preoccupied with grades.” As a result, determining “what the teacher wants” and concerns about having “points taken off” were salient issues for many. Bowden notes that the interviews suggested some students’ desire to “exert their own authority” in rejecting suggested revisions, but she maintains that this effort often “butts up against a concern about grades and scores” that may attenuate the positive effects of some comments.

Bowden reiterates that students spoke appreciatively of comments that encouraged “conversations about ideas, texts, readers, and their own subject positions as writers” and of those that recognized students’ own contributions to their work. Yet, she notes, the variety of factors influencing students’ responses to comments, including, for example, cultural differences and social interactions in the classroom, make it difficult to pinpoint the most effective kind of comment. Given these variables, Bowden writes, “It is small wonder, then, that even the ‘best’ comments may not result in an improved draft.”

The author discusses strategies to ameliorate the degree to which an emphasis on grades may interfere with learning, including contract grading, portfolio grading, and reflective assignments. However, she concludes, even reflective papers, which are themselves written for grades, may disguise what actually occurs when students confront instructor comments. Ultimately Bowden contends that the interviews conducted for her study contain better evidence of “the less ‘visible’ work of learning” than do the draft revisions themselves. She offers three examples of students who were, in her view,

thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time.

She considers such activities “problem-solving” even though the problem could not be solved in time to affect the final draft.

Bowden notes that her study population is not representative of the broad range of students in writing classes at other kinds of institutions. She recommends further work geared toward understanding how teacher feedback can encourage the “habits of mind” denoted as the goal of learning by the 2010 Framework for Success in Postsecondary Writing produced by the WPA, the National Council of Teachers of English, and the National Writing Project. Such understanding, she contends, can be effective in dealing with administrators and stakeholders outside of the classroom.



McAlear and Pedretti. When is a Paper “Done”? Comp. Studies, Fall 2016. Posted 03/02/2017.

McAlear, Rob, and Mark Pedretti. “Writing Toward the End: Students’ Perceptions of Doneness in the Composition Classroom.” Composition Studies 44.2 (2016): 72-93. Web. 20 Feb. 2017.

Rob McAlear and Mark Pedretti describe a survey designed to shed light on students’ conceptions of “doneness,” or how writers decide that a piece of writing is complete.

McAlear and Pedretti argue that writing teachers tend to consider writing an ongoing process that never really ends. In their view, this approach values “process over product,” with the partial result that the issue of how a writing task reaches satisfactory completion is seldom addressed in composition scholarship (72). They contend that experienced writers acquire an ability “central to compositional practice” of recognizing when a piece is ready for submission, and that writing instructors can help students develop their own awareness of what makes a piece complete.

A first step in this pedagogical process, McAlear and Pedretti write, is to understand how students actually make this decision about their college assignments (73). Their article seeks to determine what criteria students actually use and how these criteria differ as student writers move through different levels of college writing (73).

McAlear and Pedretti review the limited references to doneness in composition scholarship, noting that earlier resources like Erika Lindemann and Daniel Anderson’s A Rhetoric for Writing Teachers and Janet Emig’s work suggest that the most important factors are deadlines and a sense that the writer has nothing more to say. The authors find these accounts “unsatisfying” (74). Nancy Sommers, they state, recognizes that writing tasks do end but explores neither the criteria nor the “implications for those criteria” (75). Linda Flower and John R. Hayes, in their cognitive model, suggest that endings are determined by a writer’s “task representation,” with solution of a problem the supposed end point. Again, the authors find that knowing how writers “defin[e] a problem” does not explain how writers know they “have reached an adequate solution” (75).

One reason doneness has not been explicitly addressed, McAlear and Pedretti posit, is its possible relationship to “products” as the end of writing. Yet, they argue, “one of the implicit goals of teaching writing as a process is to get better products” (76). In their view, interrogating how writers come to regard their work as finished need not commit scholars to a “Big Theory” approach; “completion,” like process, can be rhetorically focused, responsive to specific audiences and purposes (76).

The authors surveyed 59 students in four first-year and four second-year writing courses at a Midwest research institution (78). The survey consisted of ten questions; analysis focused on the first two, asking about the student’s year and major, and on two questions, Q5 and Q10, that specifically asked how students decided a piece was finished. Question 5 was intended to elicit information about “a cognitive state,” whereas Question 10 asked about specific criteria (78).

Coding answers yielded three strategies: Internal, Criteria, and Process. “Internal” responses “linked to personal, emotional, or aesthetic judgments, such as feeling satisfied with one’s work or that the paper ‘flowed’” (79). Answers classified under “Criteria” referenced “empirical judgments of completion” such as meeting the requirements of the assignment (79). In “Process” answers, “any step in the writing process . . . was explicitly mentioned,” such as proofreading or peer review (79). McAlear and Pedretti coded some responses as combinations of the basic strategies, such as IP for “Internal-Process” or PC for “Process-Criteria” (80).

Survey responses indicated that first-year students tended to use a single strategy to determine doneness, with Internal or Process dominant. Nearly half of second-year students also used only one marker, but with a shift from Internal to Criteria strategies (79-80). Students responding to Question 10 claimed to use more than one strategy, perhaps because an intervening question triggered more reflection on their strategies (80). However, the authors were surprised that 33% of first-year students and 48% of second-year students did not mention Process strategies at all (80). Overall, first-year writers were more likely to report Internal or Process options, while second-year writers trended more to external Criteria (80-81).

McAlear and Pedretti found that for first-year students particularly, “Process” involved only “lower-order” strategies like proofreading (81). The authors recoded references to proofreading or correctness into a new category, “Surface.” With this revision, first-year students’ preference for Internal strategies became “even more prominent,” while second-year students’ use of Process strategies other than “Surface” was highlighted (82).

Study results do not support what McAlear and Pedretti consider a common perception that correctness and page length dictate students’ decisions about doneness (84). The authors posit that “students may be relying on equally simple, but qualitatively distinct, criteria” (84). First-year students commonly pointed to “proofreading and having nothing more to say,” while second-year students expressed concern with “meeting the criteria of the prompt” (84).

McAlear and Pedretti note that even among second-year students who had been exposed to more than one writing class, these responses indicate very little “awareness of rhetorical situation” (84). Although responding to the rhetorical situation of a college classroom, McAlear and Pedretti argue, second-year students interpret the actual expectations of a writing class simplistically (85). Considerations that writing teachers would hope for, like “Is this portion of my argument persuasive for my audience,” were completely missing (84). Moreover, many second-year students did not note Process at all, despite presumably having encountered the concept often (85).

McAlear and Pedretti propose that the shift away from Internal, affective markers to external, criteria-focused, albeit reductive, strategies may reflect a “loss of confidence” as students encountering unfamiliar discourses no longer trust their ability to judge their own success (85-86). The authors suggest that, because students cannot easily frame a rhetorical problem, “they do not know their endpoint” and thus turn to teachers for explicit instruction on what constitutes an adequate response (87).

For the authors, the moment when students move to external criteria and must articulate these criteria is an opportunity to introduce a vocabulary on doneness and to encourage attention to the different kinds of criteria suitable for different rhetorical contexts (88). Instructors can use reflective activities and examination of others’ decisions as revealed in their work to incorporate issues of doneness into rhetorical education as they explicitly provide a range of strategies, from internal satisfaction to genre-based criteria (88-89). Students might revise writing tasks for different genres and consider how, for example, completion criteria for an essay differ from those for a speech (90).

The authors propose that such attention to the question of doneness may shed light on problems like “writing anxiety, procrastination, and even plagiarism” (84). Ultimately, they write, “knowing when to stop writing is a need that many of our students have, and one for which we have not yet adequately prepared them” (90).

 



Hansen et al. Effectiveness of Dual Credit Courses. WPA Journal, Spring 2015. Posted 08/12/15.

Hansen, Kristine, Brian Jackson, Brett C. McInelly, and Dennis Eggett. “How Do Dual Credit Students Perform on College Writing Tasks After They Arrive on Campus? Empirical Data from a Large-Scale Study.” Journal of the Council of Writing Program Administrators 38.2 (2015): 56-92. Print.

Kristine Hansen, Brian Jackson, Brett C. McInelly, and Dennis Eggett conducted a study at Brigham Young University (BYU) to determine whether students who took a dual-credit/concurrent-enrollment writing course (DC/CE) fared as well on the writing assigned in a subsequent required general-education course as students who took or were taking the university’s first-year-writing (FYW) course. With few exceptions, Hansen et al. concluded that students who had earned credit through the pre-college courses performed similarly to students who had not. However, the study raised questions about the degree to which taking college writing in high school, or, for that matter, in any single class, adequately meets the needs of maturing student writers (79).

The exigence for the study was the proliferation of efforts to move college work into high schools, presumably to allow students to graduate faster and thus lower the cost of college, with some jurisdictions allowing students as young as fourteen to earn college credit in high school (58). Local, state, and federal policy makers all support and even “mandate” such opportunities (57), with rhetorical and financial backing from organizations and non-profits promoting college credit as a boon to the overall economy (81). Hansen et al. express concern that no uniform standards or qualifications govern these initiatives (58).

The study examined writing in BYU’s “American Heritage” (AH) course, which in September 2012 enrolled approximately half of the first-year class. Students in the course wrote two 900-word papers involving argument and research. They wrote the first paper in stages, with grades and TA feedback throughout; for the second paper, they relied on peer feedback and their understanding of an effective writing process, which they had presumably learned in the first assignment (64). Hansen et al. provide the prompts for both assignments (84-87).

The study consisted of several components. Students in the AH course were asked to sign a consent form; those who did so were emailed a survey about their prior writing instruction. Of these, 713 took the survey. From these 713 students, 189 were selected (60-61). Trained raters using a holistic rubric with a 6-point scale read both essays submitted by these 189 students. The rubric pinpointed seven traits: “thesis, critical awareness, evidence, counter-arguments, organization, grammar and style, sources and citations” (65). A follow-up survey assessed students’ experiences writing the second paper, while focus groups provided additional qualitative information. Hansen et al. note that although only eleven students participated in the focus groups, the discussion provided “valuable insights into students’ motivations for taking pre-college credit options and the learning experiences they had” (65).

The 189 participants fell into five groups: those whose “Path to FYW Credit” consisted of AP scores; those who received credit for a DC/CE option; those planning to take FYW in the future; those taking it concurrently with AH; and those who had taken BYU’s course, many of them in the preceding summer (61, 63). Analysis revealed that the students studied were a good match for the full BYU first-year population in such categories as high-school GPA and ACT scores (62). However, strong high-school GPAs and ACT scores, evidence of regular one-on-one interaction with instructors (71), and the description of BYU as a “private institution” with “very selective admission standards” (63) indicate that the students studied, while coming from many geographic regions, were especially strong students whose experiences could not be generalized to different populations (63, 82).

Qualitative results indicated that, for the small sample of students who participated in the focus group, the need to “get FYW out of the way” was not the main reason for choosing AP or DC/CE options. Rather, the students wanted “a more challenging curriculum” (69). These students reported good teaching practices; in contrast to the larger group taking the earlier survey, who reported writing a variety of papers, the students in the focus group reported a “literature[-]based” curriculum with an emphasis on timed essays and fewer research papers (69). Quotes from the focus-group students who took the FYW course from BYU reveal that they found it “repetitive” and “a good refresher,” not substantially different despite their having reported an emphasis on literary analysis in the high-school courses (72). The students attested that the earlier courses had prepared them well, although some expressed concerns about their comfort coping with various aspects of the first-year experience (71-72).

Three findings invited particular discussion (73):

  • Regardless of the writing instruction they had received, the students differed very little in their performance in the American Heritage class;
  • In general, although their GPAs and test scores indicated that they should be superior writers, the students scored in the center of the 6-point rubric scale, below expectations;
  • Scores were generally higher for the first essay than for the second.

The researchers argue that the first finding does not provide definitive evidence as to whether “FYW even matters” (73). They cite research by numerous scholars that indicates that the immediate effects of a writing experience are difficult to measure because the learning of growing writers does not exhibit a “tidy linear trajectory” (74). The FYW experience may trigger “steps backward” (Nancy Sommers, qtd. in Hansen et al. 72). The accumulation of new knowledge, they posit, can interfere with performance. Therefore, students taking FYW concurrently with AH might have been affected by taking in so much new material (74), while those who had taken the course in the summer had significantly lower GPAs and ACT scores (63). The authors suggest that these factors may have skewed the performance of students with FYW experience.

The second finding, the authors posit, similarly indicates students in the early-to-middle stages of becoming versatile, effective writers across a range of genres. Hansen et al. cite research on the need for a “significant apprenticeship period” in writing maturation (76). Students in their first year of college are only beginning to negotiate this developmental stage.

The third finding may indicate a difference in the demands of the two prompts, a difference in the time and energy students could devote to later assignments, or, the authors suggest, the difference in the feedback built into the two papers (76-77).

Hansen et al. recommend support for the NCTE position that taking a single course, especially at an early developmental stage, does not provide students an adequate opportunity for the kind of sustained practice across multiple genres required for meaningful growth in writing (77-80). Decisions about DC/CE options should be based on individual students’ qualifications (78); programs should work to include additional writing courses in the overall curriculum, designing these courses to allow students to build on skills initiated in AP, DC/CE, and FYW courses (79).

They further recommend that writing programs shift from promising something “new” and “different” to an emphasis on the recursive, nonlinear nature of writing, clarifying to students and other stakeholders the value of ongoing practice (80). Additionally, they recommend attention to the motives and forces of the “growth industry” encouraging the transfer of more and more college credit to high schools (80). The organizations sustaining this industry, they write, hope to foster a more literate, capable workforce. But the authors contend that speeding up and truncating the learning process, particularly with regard to a complex cognitive task like writing, undercut this aim (81-82) and do not, in fact, guarantee faster graduation (79). Finally, citing Richard Haswell, they call for more empirical, replicable studies of phenomena like the effects of DC/CE courses in order to document their impact across broad demographics (82).